Updates from: 11/02/2022 02:11:13
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Certificate Based Authentication Certificateuserids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-certificateuserids.md
GET https://graph.microsoft.com/v1.0/users?$filter=startswith(certificateUserIds
GET https://graph.microsoft.com/v1.0/users?$filter=certificateUserIds eq 'bob@contoso.com'
+## Update certificate user IDs using Microsoft Graph queries
+Use a PATCH request to update the certificateUserIds value on the user object for a given user ID.
+
+#### Request:
+
+```http
+PATCH https://graph.microsoft.com/v1.0/users/{id}
+Content-Type: application/json
+{
+
+ "department": "Accounting",
+ "authorizationInfo": {
+ "certificateUserIds": [
+ "X509:<PN>123456789098765@mil"
+ ]
+ }
+}
+```
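To verify the change, you can read the value back with a `$select` query. This is a minimal sketch; it assumes the same `{id}` placeholder used in the PATCH request and a caller with sufficient permissions to read the user's authorization information:

```http
GET https://graph.microsoft.com/v1.0/users/{id}?$select=authorizationInfo
```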
+## Next steps - [Overview of Azure AD CBA](concept-certificate-based-authentication.md)
active-directory Howto Mfa App Passwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-app-passwords.md
Previously updated : 06/20/2022 Last updated : 11/01/2022
Modern authentication is supported for the Microsoft Office 2013 clients and lat
This article shows you how to use app passwords for legacy applications that don't support multi-factor authentication prompts. >[!NOTE]
-> App passwords don't work with Conditional Access based multi-factor authentication policies and modern authentication.
+> App passwords don't work with Conditional Access based multi-factor authentication policies and modern authentication. App passwords only work with legacy authentication protocols such as IMAP and SMTP.
## Overview and considerations
active-directory Howto Mfaserver Dir Ldap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-dir-ldap.md
Title: LDAP Authentication and Azure MFA Server - Azure Active Directory
+ Title: LDAP Authentication and Azure Multi-Factor Authentication Server - Azure Active Directory
description: Deploying LDAP Authentication and Azure Multi-Factor Authentication Server. Previously updated : 07/11/2018 Last updated : 10/31/2022
# LDAP authentication and Azure Multi-Factor Authentication Server
-By default, the Azure Multi-Factor Authentication Server is configured to import or synchronize users from Active Directory. However, it can be configured to bind to different LDAP directories, such as an ADAM directory, or specific Active Directory domain controller. When connected to a directory via LDAP, the Azure Multi-Factor Authentication Server can act as an LDAP proxy to perform authentications. It also allows for the use of LDAP bind as a RADIUS target, for pre-authentication of users with IIS Authentication, or for primary authentication in the Azure MFA user portal.
+By default, the Azure Multi-Factor Authentication Server is configured to import or synchronize users from Active Directory. However, it can be configured to bind to different LDAP directories, such as an ADAM directory, or specific Active Directory domain controller. When connected to a directory via LDAP, the Azure Multi-Factor Authentication Server can act as an LDAP proxy to perform authentications. Azure Multi-Factor Authentication Server can also use LDAP bind as a RADIUS target to pre-authenticate IIS users, or for primary authentication in the Azure Multi-Factor Authentication user portal.
To use Azure Multi-Factor Authentication as an LDAP proxy, insert the Azure Multi-Factor Authentication Server between the LDAP client (for example, VPN appliance, application) and the LDAP directory server. The Azure Multi-Factor Authentication Server must be configured to communicate with both the client servers and the LDAP directory. In this configuration, the Azure Multi-Factor Authentication Server accepts LDAP requests from client servers and applications and forwards them to the target LDAP directory server to validate the primary credentials. If the LDAP directory validates the primary credentials, Azure Multi-Factor Authentication performs a second identity verification and sends a response back to the LDAP client. The entire authentication succeeds only if both the LDAP server authentication and the second-step verification succeed. > [!IMPORTANT]
-> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers that want to require multi-factor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication.
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure Multi-Factor Authentication service by using the latest Migration Utility included in the most recent [Azure Multi-Factor Authentication Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure Multi-Factor Authentication Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
>
-> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
->
-> Existing customers that activated MFA Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual.
+> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
## Configure LDAP authentication
To configure LDAP authentication, install the Azure Multi-Factor Authentication
4. If you plan to use LDAPS from the client to the Azure Multi-Factor Authentication Server, a TLS/SSL certificate must be installed on the same server as MFA Server. Click **Browse** next to the SSL (TLS) certificate box, and select a certificate to use for the secure connection. 5. Click **Add**. 6. In the Add LDAP Client dialog box, enter the IP address of the appliance, server, or application that authenticates to the Server and an Application name (optional). The Application name appears in Azure Multi-Factor Authentication reports and may be displayed within SMS or Mobile App authentication messages.
-7. Check the **Require Azure Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to two-step verification. If a significant number of users have not yet been imported into the Server and/or are exempt from two-step verification, leave the box unchecked. See the MFA Server help file for additional information on this feature.
+7. Check the **Require Azure Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to two-step verification. If a significant number of users haven't yet been imported into the Server and/or are exempt from two-step verification, leave the box unchecked. See the MFA Server help file for additional information on this feature.
-Repeat these steps to add additional LDAP clients.
+Repeat these steps to add more LDAP clients.
### Configure the LDAP directory connection
When the Azure Multi-Factor Authentication is configured to receive LDAP authent
12. Click the **Company Settings** icon and select the **Username Resolution** tab. 13. If you're connecting to Active Directory from a domain-joined server, leave the **Use Windows security identifiers (SIDs) for matching usernames** radio button selected. Otherwise, select the **Use LDAP unique identifier attribute for matching usernames** radio button.
-When the **Use LDAP unique identifier attribute for matching usernames** radio button is selected, the Azure Multi-Factor Authentication Server attempts to resolve each username to a unique identifier in the LDAP directory. An LDAP search is performed on the Username attributes defined in the Directory Integration -> Attributes tab. When a user authenticates, the username is resolved to the unique identifier in the LDAP directory. The unique identifier is used for matching the user in the Azure Multi-Factor Authentication data file. This allows for case-insensitive comparisons, and long and short username formats.
+When the **Use LDAP unique identifier attribute for matching usernames** radio button is selected, the Azure Multi-Factor Authentication Server attempts to resolve each username to a unique identifier in the LDAP directory. An LDAP search is performed on the Username attributes defined in the Directory Integration > Attributes tab. When a user authenticates, the username is resolved to the unique identifier in the LDAP directory. The unique identifier is used for matching the user in the Azure Multi-Factor Authentication data file. This allows for case-insensitive comparisons, and long and short username formats.
After you complete these steps, the MFA Server listens on the configured ports for LDAP access requests from the configured clients, and acts as a proxy for those requests to the LDAP directory for authentication.
After you complete these steps, the MFA Server listens on the configured ports f
To configure the LDAP client, use the following guidelines:
-* Configure your appliance, server, or application to authenticate via LDAP to the Azure Multi-Factor Authentication Server as though it were your LDAP directory. Use the same settings that you would normally use to connect directly to your LDAP directory, except for the server name or IP address, which will be that of the Azure Multi-Factor Authentication Server.
-* Configure the LDAP timeout to 30-60 seconds so that there is time to validate the user's credentials with the LDAP directory, perform the second-step verification, receive their response, and respond to the LDAP access request.
+* Configure your appliance, server, or application to authenticate via LDAP to the Azure Multi-Factor Authentication Server as though it were your LDAP directory. Use the same settings that you normally use to connect directly to your LDAP directory, but use the Azure Multi-Factor Authentication Server for the server name or IP address.
+* Configure the LDAP timeout to 30-60 seconds to provide enough time to validate the user's credentials with the LDAP directory, perform the second-step verification, receive their response, and respond to the LDAP access request.
* If using LDAPS, the appliance or server making the LDAP queries must trust the TLS/SSL certificate installed on the Azure Multi-Factor Authentication Server.
active-directory Howto Mfaserver Iis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-iis.md
Title: IIS Authentication and Azure MFA Server - Azure Active Directory
+ Title: IIS Authentication and Azure Multi-Factor Authentication Server - Azure Active Directory
description: Deploying IIS Authentication and Azure Multi-Factor Authentication Server. Previously updated : 07/11/2018 Last updated : 10/31/2022
# Configure Azure Multi-Factor Authentication Server for IIS web apps
-Use the IIS Authentication section of the Azure Multi-Factor Authentication (MFA) Server to enable and configure IIS authentication for integration with Microsoft IIS web applications. The Azure MFA Server installs a plug-in that can filter requests being made to the IIS web server to add Azure Multi-Factor Authentication. The IIS plug-in provides support for Form-Based Authentication and Integrated Windows HTTP Authentication. Trusted IPs can also be configured to exempt internal IP addresses from two-factor authentication.
+Use the IIS Authentication section of the Azure Multi-Factor Authentication (MFA) Server to enable and configure IIS authentication for integration with Microsoft IIS web applications. The Azure Multi-Factor Authentication Server installs a plug-in that can filter requests being made to the IIS web server to add Azure Multi-Factor Authentication. The IIS plug-in provides support for Form-Based Authentication and Integrated Windows HTTP Authentication. Trusted IPs can also be configured to exempt internal IP addresses from two-factor authentication.
> [!IMPORTANT]
-> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers that want to require multi-factor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication.
->
-> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
->
-> Existing customers that activated MFA Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual.
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure Multi-Factor Authentication service by using the latest Migration Utility included in the most recent [Azure Multi-Factor Authentication Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure Multi-Factor Authentication Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
>
+> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
+>
> When you use cloud-based Azure Multi-Factor Authentication, there is no alternative to the IIS plugin provided by Azure Multi-Factor Authentication (MFA) Server. Instead, use Web Application Proxy (WAP) with Active Directory Federation Services (AD FS) or Azure Active Directory's Application Proxy. ![IIS Authentication in MFA Server](./media/howto-mfaserver-iis/iis.png)
To secure an IIS web application that uses form-based authentication, install th
2. Click the **Form-Based** tab. 3. Click **Add**. 4. To detect username, password and domain variables automatically, enter the Login URL (like `https://localhost/contoso/auth/login.aspx`) within the Auto-Configure Form-Based Website dialog box and click **OK**.
-5. Check the **Require Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to multi-factor authentication. If a significant number of users have not yet been imported into the Server and/or will be exempt from multi-factor authentication, leave the box unchecked.
-6. If the page variables cannot be detected automatically, click **Specify Manually** in the Auto-Configure Form-Based Website dialog box.
+5. Check the **Require Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to multi-factor authentication. If a significant number of users haven't yet been imported into the Server and/or will be exempt from multi-factor authentication, leave the box unchecked.
+6. If the page variables can't be detected automatically, click **Specify Manually** in the Auto-Configure Form-Based Website dialog box.
7. In the Add Form-Based Website dialog box, enter the URL to the login page in the Submit URL field and enter an Application name (optional). The Application name appears in Azure Multi-Factor Authentication reports and may be displayed within SMS or Mobile App authentication messages. 8. Select the correct Request format. This is set to **POST or GET** for most web applications. 9. Enter the Username variable, Password variable, and Domain variable (if it appears on the login page). To find the names of the input boxes, navigate to the login page in a web browser, right-click on the page, and select **View Source**.
-10. Check the **Require Azure Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to multi-factor authentication. If a significant number of users have not yet been imported into the Server and/or will be exempt from multi-factor authentication, leave the box unchecked.
+10. Check the **Require Azure Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to multi-factor authentication. If a significant number of users haven't yet been imported into the Server and/or will be exempt from multi-factor authentication, leave the box unchecked.
11. Click **Advanced** to review advanced settings, including: - Select a custom denial page file
To secure an IIS web application that uses form-based authentication, install th
## Using integrated Windows authentication with Azure Multi-Factor Authentication Server
-To secure an IIS web application that uses Integrated Windows HTTP authentication, install the Azure MFA Server on the IIS web server, then configure the Server with the following steps:
+To secure an IIS web application that uses Integrated Windows HTTP authentication, install the Azure Multi-Factor Authentication Server on the IIS web server, then configure the Server with the following steps:
1. In the Azure Multi-Factor Authentication Server, click the IIS Authentication icon in the left menu. 2. Click the **HTTP** tab. 3. Click **Add**. 4. In the Add Base URL dialogue box, enter the URL for the website where HTTP authentication is performed (like `http://localhost/owa`) and provide an Application name (optional). The Application name appears in Azure Multi-Factor Authentication reports and may be displayed within SMS or Mobile App authentication messages.
-5. Adjust the Idle timeout and Maximum session times if the default is not sufficient.
-6. Check the **Require Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to multi-factor authentication. If a significant number of users have not yet been imported into the Server and/or will be exempt from multi-factor authentication, leave the box unchecked.
+5. Adjust the Idle timeout and Maximum session times if the default isn't sufficient.
+6. Check the **Require Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to multi-factor authentication. If a significant number of users haven't yet been imported into the Server and/or will be exempt from multi-factor authentication, leave the box unchecked.
7. Check the **Cookie cache** box if desired. 8. Click **OK**.
To secure an IIS web application that uses Integrated Windows HTTP authenticatio
After configuring the Form-Based or HTTP authentication URLs and settings, select the locations where the Azure Multi-Factor Authentication IIS plug-ins should be loaded and enabled in IIS. Use the following procedure:
-1. If running on IIS 6, click the **ISAPI** tab. Select the website that the web application is running under (e.g. Default Web Site) to enable the Azure Multi-Factor Authentication ISAPI filter plug-in for that site.
+1. If running on IIS 6, click the **ISAPI** tab. Select the website that the web application is running under (for example, Default Web Site) to enable the Azure Multi-Factor Authentication ISAPI filter plug-in for that site.
2. If running on IIS 7 or higher, click the **Native Module** tab. Select the server, websites, or applications to enable the IIS plug-in at the desired levels. 3. Click the **Enable IIS authentication** box at the top of the screen. Azure Multi-Factor Authentication is now securing the selected IIS application. Ensure that users have been imported into the Server. ## Trusted IPs
-The Trusted IPs allows users to bypass Azure Multi-Factor Authentication for website requests originating from specific IP addresses or subnets. For example, you may want to exempt users from Azure Multi-Factor Authentication while logging in from the office. For this, you would specify the office subnet as a Trusted IPs entry. To configure Trusted IPs, use the following procedure:
+The Trusted IPs feature allows users to bypass Azure Multi-Factor Authentication for website requests originating from specific IP addresses or subnets. For example, you may want to exempt users from Azure Multi-Factor Authentication while logging in from the office. In that case, you can specify the office subnet as a Trusted IPs entry. To configure Trusted IPs, use the following procedure:
1. In the IIS Authentication section, click the **Trusted IPs** tab. 2. Click **Add**.
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
On Windows 7, iOS, Android, and macOS Azure AD identifies the device using a cli
#### Chrome support
-For Chrome support in **Windows 10 Creators Update (version 1703)** or later, install the [Windows 10 Accounts](https://chrome.google.com/webstore/detail/windows-10-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji) or [Office Online](https://chrome.google.com/webstore/detail/office/ndjpnladcallmjemlbaebfadecfhkepb) extensions. These extensions are required when a Conditional Access policy requires device-specific details.
+For Chrome support in **Windows 10 Creators Update (version 1703)** or later, install the [Windows Accounts](https://chrome.google.com/webstore/detail/windows-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji) or [Office](https://chrome.google.com/webstore/detail/office/ndjpnladcallmjemlbaebfadecfhkepb) extensions. These extensions are required when a Conditional Access policy requires device-specific details.
To automatically deploy this extension to Chrome browsers, create the following registry key:
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
Check out [Primary Refresh Tokens](../devices/concept-primary-refresh-token.md)
## Next steps - Learn about [`id_tokens` in Azure AD](id-tokens.md).-- Learn about permission and consent ( [v1.0](../azuread-dev/v1-permissions-consent.md), [v2.0](v2-permissions-and-consent.md)).
+- Learn about permission and consent ( [v1.0](../azuread-dev/v1-permissions-consent.md), [v2.0](permissions-consent-overview.md)).
active-directory Active Directory V2 Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-v2-protocols.md
Your client app needs a way to trust the security tokens issued to it by the Mic
When you register your app in Azure AD, the Microsoft identity platform automatically assigns it some values, while others you configure based on the application's type.
-Two the most commonly referenced app registration settings are:
+Two of the most commonly referenced app registration settings are:
* **Application (client) ID** - Also called _application ID_ and _client ID_, this value is assigned to your app by the Microsoft identity platform. The client ID uniquely identifies your app in the identity platform and is included in the security tokens the platform issues. * **Redirect URI** - The authorization server uses a redirect URI to direct the resource owner's *user-agent* (web browser, mobile app) to another destination after completing their interaction. For example, after the end-user authenticates with the authorization server. Not all client types use redirect URIs.
active-directory Application Consent Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/application-consent-experience.md
Title: Azure AD app consent experiences description: Learn more about the Azure AD consent experiences to see how you can use it when managing and developing applications on Azure AD -+ - Previously updated : 04/18/2022-- Last updated : 11/01/2022++
-# Understanding Azure AD application consent experiences
-
-Learn more about the Azure Active Directory (Azure AD) application consent user experience. So you can intelligently manage applications for your organization and/or develop applications with a more seamless consent experience.
+# Consent experience for applications in Azure Active Directory
-## Consent and permissions
+In this article, you'll learn about the Azure Active Directory (Azure AD) application consent user experience. You'll then be able to intelligently manage applications for your organization and/or develop applications with a more seamless consent experience.
Consent is the process of a user granting authorization to an application to access protected resources on their behalf. An admin or user can be asked for consent to allow access to their organization/individual data.
The following diagram and table provide information about the building blocks of
| 2 | Title | The title changes based on whether the users are going through the user or admin consent flow. In user consent flow, the title will be "Permissions requested" while in the admin consent flow the title will have an additional line "Accept for your organization". | | 3 | App logo | This image should help users have a visual cue of whether this app is the app they intended to access. This image is provided by application developers and the ownership of this image isn't validated. | | 4 | App name | This value should inform users which application is requesting access to their data. Note this name is provided by the developers and the ownership of this app name isn't validated.|
-| 5 | Publisher name and verification | The blue "verified" badge means that the app publisher has verified their identity using a Microsoft Partner Network account and has completed the verification process. If the app is publisher verified, the publisher name is displayed. If the app is not publisher verified, "Unverified" is displayed instead of a publisher name. For more information, read about [Publisher Verification](publisher-verification-overview.md). Selecting the publisher name displays more app info as available, such as the publisher name, publisher domain, date created, certification details, and reply URLs. |
+| 5 | Publisher name and verification | The blue "verified" badge means that the app publisher has verified their identity using a Microsoft Partner Network account and has completed the verification process. If the app is publisher verified, the publisher name is displayed. If the app isn't publisher verified, "Unverified" is displayed instead of a publisher name. For more information, read about [Publisher Verification](publisher-verification-overview.md). Selecting the publisher name displays more app info as available, such as the publisher name, publisher domain, date created, certification details, and reply URLs. |
| 6 | Microsoft 365 Certification | The Microsoft 365 Certification logo means that an app has been vetted against controls derived from leading industry standard frameworks, and that strong security and compliance practices are in place to protect customer data. For more information, read about [Microsoft 365 Certification](/microsoft-365-app-certification/docs/enterprise-app-certification-guide).| | 7 | Publisher information | Displays whether the application is published by Microsoft. |
-| 8 | Permissions | This list contains the permissions being requested by the client application. Users should always evaluate the types of permissions being requested to understand what data the client application will be authorized to access on their behalf if they accept. As an application developer it is best to request access, to the permissions with the least privilege. |
+| 8 | Permissions | This list contains the permissions being requested by the client application. Users should always evaluate the types of permissions being requested to understand what data the client application will be authorized to access on their behalf if they accept. As an application developer, it's best to request the permissions with the least privilege. |
| 9 | Permission description | This value is provided by the service exposing the permissions. To see the permission descriptions, you must toggle the chevron next to the permission. | | 10 | https://myapps.microsoft.com | This is the link where users can review and remove any non-Microsoft applications that currently have access to their data. | | 11 | Report it here | This link is used to report a suspicious app if you don't trust the app, if you believe the app is impersonating another app, if you believe the app will misuse your data, or for some other reason. |
-## App requires a permission within the user's scope of authority
+## Common scenarios and consent experiences
-A common consent scenario is that the user accesses an app which requires a permission set that is within the user's scope of authority. The user is directed to the user consent flow.
+The following section describes the common scenarios and the expected consent experience for each of them.
+### App requires a permission that the user has the right to grant
-Admins will see an additional control on the traditional consent prompt that will allow them consent on behalf of the entire tenant. The control will be defaulted to off, so only when admins explicitly check the box will consent be granted on behalf of the entire tenant. As of today, this check box will only show for the Global Admin role, so Cloud Admin and App Admin will not see this checkbox.
+In this consent scenario, the user accesses an app that requires a permission set that is within the user's scope of authority. The user is directed to the user consent flow.
+
+Admins will see an additional control on the traditional consent prompt that will allow them to give consent on behalf of the entire tenant. The control will be defaulted to off, so only when admins explicitly check the box will consent be granted on behalf of the entire tenant. The check box will only show for the Global Admin role, so Cloud Admin and App Admin won't see this checkbox.
:::image type="content" source="./media/application-consent-experience/consent_prompt_1a.png" alt-text="Consent prompt for scenario 1a":::
Users will see the traditional consent prompt.
:::image type="content" source="./media/application-consent-experience/consent_prompt_1b.png" alt-text="Screenshot that shows the traditional consent prompt.":::
-## App requires a permission outside of the user's scope of authority
+### App requires a permission that the user has no right to grant
-Another common consent scenario is that the user accesses an app which requires at least one permission that is outside the user's scope of authority.
+In this consent scenario, the user accesses an app that requires at least one permission that is outside the user's scope of authority.
Admins will see an additional control on the traditional consent prompt that will allow them to consent on behalf of the entire tenant. :::image type="content" source="./media/application-consent-experience/consent_prompt_1a.png" alt-text="Consent prompt for scenario 1a":::
-Non-admin users will be blocked from granting consent to the application, and they will be told to ask their admin for access to the app.
+Non-admin users will be blocked from granting consent to the application, and they'll be told to ask their admin for access to the app. If admin consent workflow is enabled in the user's tenant, non-admin users are able to submit a request for admin approval from the consent prompt. For more information on admin consent workflow, see [Admin consent workflow](../manage-apps/admin-consent-workflow-overview.md).
:::image type="content" source="./media/application-consent-experience/consent_prompt_2b.png" alt-text="Screenshot of the consent prompt telling the user to ask an admin for access to the app.":::
-## User is directed to the admin consent flow
+### User is directed to the admin consent flow
-Another common scenario is when the user navigates to or is directed to the admin consent flow.
+In this consent scenario, the user navigates to or is directed to the admin consent flow.
Admin users will see the admin consent prompt. The title and the permission descriptions change on this prompt; the changes highlight the fact that accepting this prompt will grant the app access to the requested data on behalf of the entire tenant. :::image type="content" source="./media/application-consent-experience/consent_prompt_3a.png" alt-text="Consent prompt for scenario 3a":::
-Non-admin users will be blocked from granting consent to the application, and they will be told to ask their admin for access to the app.
+Non-admin users will be blocked from granting consent to the application, and they'll be told to ask their admin for access to the app.
:::image type="content" source="./media/application-consent-experience/consent_prompt_2b.png" alt-text="Screenshot of the consent prompt telling the user to ask an admin for access to the app.":::
+### Admin consent through the Azure portal
+
+In this scenario, an administrator consents to all of the permissions that an application requests, which can include delegated permissions on behalf of all users in the tenant. The Administrator grants consent through the **API permissions** page of the application registration in the [Azure portal](https://portal.azure.com).
+
+ :::image type="content" source="./media/consent-framework/grant-consent.png" alt-text="Screenshot of explicit admin consent through the Azure portal." lightbox="./media/consent-framework/grant-consent.png":::
+
+All users in that tenant won't see the consent dialog unless the application requires new permissions. To learn which administrator roles can consent to delegated permissions, see [Administrator role permissions in Azure AD](../roles/permissions-reference.md).
+
+ > [!IMPORTANT]
+ > Granting explicit consent using the **Grant permissions** button is currently required for single-page applications (SPA) that use MSAL.js. Otherwise, the application fails when the access token is requested.
+
+## Common issues
+This section outlines the common issues with the consent experience and possible troubleshooting tips.
+
+- 403 error
+
+ - Is this a [delegated scenario](permissions-consent-overview.md)? What permissions does a user have?
+ - Are necessary permissions added to use the endpoint?
+ - Check the [token](https://jwt.ms/) to see if it has necessary claims to call the endpoint.
+ - What permissions have been consented to? Who consented?
+
+- User is unable to consent
+
+ - Check if tenant admin has disabled user consent for your organization
+  - Confirm if the permissions you're requesting are admin-restricted permissions.
+
+- User is still blocked even after admin has consented
+
+ - Check if [static permissions](consent-types-developer.md) are configured to be a superset of permissions requested dynamically.
+ - Check if user assignment is required for the app.
+
+## Troubleshoot known errors
+
+For troubleshooting steps, see [Unexpected error when performing consent to an application](../manage-apps/application-sign-in-unexpected-user-consent-error.md).
## Next steps - Get a step-by-step overview of [how the Azure AD consent framework implements consent](./quickstart-register-app.md).
active-directory Consent Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/consent-framework.md
- Title: Microsoft identity platform consent framework
-description: Learn about the consent framework in the Microsoft identity platform and how it applies to multi-tenant applications.
-------- Previously updated : 03/29/2022-----
-# Microsoft identity platform consent framework
-
-Multi-tenant applications allow sign-ins by user accounts from Azure AD tenants other than the tenant in which the app was initially registered. The Microsoft identity platform consent framework enables a tenant administrator or user in these other tenants to consent to (or deny) an application's request for permission to access their resources.
-
-For example, perhaps a web application requires read-only access to a user's calendar in Microsoft 365. It's the identity platform's consent framework that enables the prompt asking the user to consent to the app's request for permission to read their calendar. If the user consents, the application is able to call the Microsoft Graph API on their behalf and get their calendar data.
-
-## Consent experience - an example
-
-The following steps show you how the consent experience works for both the application developer and the user.
-
-1. Assume you have a web client application that needs to request specific permissions to access a resource/API. You'll learn how to do this configuration in the next section, but essentially the Azure portal is used to declare permission requests at configuration time. Like other configuration settings, they become part of the application's Azure AD registration:
-
- :::image type="content" source="./media/consent-framework/permissions.png" alt-text="Permissions to other applications" lightbox="./media/consent-framework/permissions.png":::
-
-1. Consider that your application's permissions have been updated, the application is running, and a user is about to use it for the first time. First, the application needs to obtain an authorization code from Azure AD's `/authorize` endpoint. The authorization code can then be used to acquire a new access and refresh token.
-
-1. If the user is not already authenticated, Azure AD's `/authorize` endpoint prompts the user to sign in.
-
- :::image type="content" source="./media/consent-framework/usersignin.png" alt-text="User or administrator sign in to Azure AD":::
-
-1. After the user has signed in, Azure AD will determine if the user needs to be shown a consent page. This determination is based on whether the user (or their organization's administrator) has already granted the application consent. If consent has not already been granted, Azure AD prompts the user for consent and displays the required permissions it needs to function. The set of permissions that are displayed in the consent dialog match the ones selected in the **Delegated permissions** in the Azure portal.
-
- :::image type="content" source="./media/consent-framework/consent.png" alt-text="Shows an example of permissions displayed in the consent dialog":::
-
-1. After the user grants consent, an authorization code is returned to your application, which is redeemed to acquire an access token and refresh token. For more information about this flow, see [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md).
-
-1. As an administrator, you can also consent to an application's delegated permissions on behalf of all the users in your tenant. Administrative consent prevents the consent dialog from appearing for every user in the tenant, and can be done in the [Azure portal](https://portal.azure.com) by users with the administrator role. To learn which administrator roles can consent to delegated permissions, see [Administrator role permissions in Azure AD](../roles/permissions-reference.md).
-
- **To consent to an app's delegated permissions**
-
- 1. Go to the **API permissions** page for your application
- 1. Click on the **Grant admin consent** button.
-
- :::image type="content" source="./media/consent-framework/grant-consent.png" alt-text="Grant permissions for explicit admin consent" lightbox="./media/consent-framework/grant-consent.png":::
-
- > [!IMPORTANT]
- > Granting explicit consent using the **Grant permissions** button is currently required for single-page applications (SPA) that use MSAL.js. Otherwise, the application fails when the access token is requested.
-
-## Next steps
-
-See [how to convert an app to multi-tenant](howto-convert-app-to-be-multi-tenant.md)
active-directory Consent Types Developer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/consent-types-developer.md
+
+ Title: Microsoft identity platform developers' guide to requesting permissions through consent
+description: Learn how developers can request for permissions through consent in the Microsoft identity platform endpoint.
++++++++ Last updated : 11/01/2022+++
+# Requesting permissions through consent
++
+Applications in the Microsoft identity platform rely on consent in order to gain access to necessary resources or APIs. Different types of consent are better for different application scenarios. Choosing the best approach to consent for your app will help it be more successful with users and organizations.
+
+In this article, you'll learn about the different types of consent and how to request permissions for your application through consent.
+
+## Static user consent
+
+In the static user consent scenario, you must specify all the permissions the app needs in its configuration in the Azure portal. If the user (or administrator, as appropriate) hasn't granted consent for this app, then the Microsoft identity platform will prompt the user to provide consent at this time.
+
+Static permissions also enable administrators to consent on behalf of all users in the organization.
+
+While relying on static consent and a single permissions list keeps the code nice and simple, it also means that your app will request all of the permissions it might ever need up front. This can discourage users and admins from approving your app's access request.
+
+## Incremental and dynamic user consent
+
+With the Microsoft identity platform endpoint, you can ignore the static permissions defined in the application registration information in the Azure portal. Instead, you can request permissions incrementally. You can ask for a bare minimum set of permissions upfront and request more over time as the customer uses additional application features. To do so, you can specify the scopes your application needs at any time by including the new scopes in the `scope` parameter when [requesting an access token](#requesting-individual-user-consent) - without the need to pre-define them in the application registration information. If the user hasn't yet consented to new scopes added to the request, they'll be prompted to consent only to the new permissions. Incremental, or dynamic consent, only applies to delegated permissions and not to application permissions.
+
+Allowing an application to request permissions dynamically through the `scope` parameter gives you full control over the user's experience. You can also front load your consent experience and ask for all permissions in one initial authorization request. If your application requires a large number of permissions, you can gather those permissions from the user incrementally as they try to use certain features of the application over time.
+
+> [!IMPORTANT]
+> Dynamic consent can be convenient, but presents a big challenge for permissions that require admin consent. The admin consent experience in the **App registrations** and **Enterprise applications** blades in the portal doesn't know about those dynamic permissions at consent time. We recommend that a developer list all the admin privileged permissions that are needed by the application in the portal. This enables tenant admins to consent on behalf of all their users in the portal, once. Users won't need to go through the consent experience for those permissions on sign in. The alternative is to use dynamic consent for those permissions. To grant admin consent, an individual admin signs in to the app, triggers a consent prompt for the appropriate permissions, and selects **consent for my entire org** in the consent dialogue.
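As an illustration of an incremental request, the following sketch shows a later authorization request that adds a single new delegated scope once the user first opens a hypothetical file-browsing feature. The client ID and redirect URI are placeholders, `Files.Read` is only an example scope, and line breaks are added for legibility:

```http
GET https://login.microsoftonline.com/common/oauth2/v2.0/authorize?
client_id={client-id}
&response_type=code
&redirect_uri={redirect-uri}
&response_mode=query
&scope=https%3A%2F%2Fgraph.microsoft.com%2FFiles.Read
&state=12345
```

Because the user consented to the app's earlier scopes during a previous sign-in, only the newly requested `Files.Read` permission produces a consent prompt.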
+
+## Requesting individual user consent
+
+In an [OpenID Connect or OAuth 2.0](active-directory-v2-protocols.md) authorization request, an application can request the permissions it needs by using the `scope` query parameter. For example, when a user signs in to an app, the application sends a request like the following example. (Line breaks are added for legibility).
+
+```HTTP
+GET https://login.microsoftonline.com/common/oauth2/v2.0/authorize?
+client_id=6731de76-14a6-49ae-97bc-6eba6914391e
+&response_type=code
+&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F
+&response_mode=query
+&scope=
+https%3A%2F%2Fgraph.microsoft.com%2Fcalendars.read%20
+https%3A%2F%2Fgraph.microsoft.com%2Fmail.send
+&state=12345
+```
+
+The `scope` parameter is a space-separated list of delegated permissions that the application is requesting. Each permission is indicated by appending the permission value to the resource's identifier (the application ID URI). In the request example, the application needs permission to read the user's calendar and send mail as the user.
+
+After the user enters their credentials, the Microsoft identity platform checks for a matching record of *user consent*. If the user hasn't consented to any of the requested permissions in the past, and if the administrator hasn't consented to these permissions on behalf of the entire organization, the Microsoft identity platform asks the user to grant the requested permissions.
++
+In the following example, the `offline_access` ("Maintain access to data you have given it access to") permission and `User.Read` ("Sign you in and read your profile") permission are automatically included in the initial consent to an application. These permissions are required for proper application functionality. The `offline_access` permission gives the application access to refresh tokens that are critical for native apps and web apps. The `User.Read` permission gives access to the `sub` claim. It allows the client or application to correctly identify the user over time and access rudimentary user information.
++
+When the user approves the permission request, consent is recorded. The user doesn't have to consent again when they later sign in to the application.
+
+## Requesting consent for an entire tenant through admin consent
+
+Requesting consent for an entire tenant requires admin consent. Admin consent done on behalf of an organization requires the static permissions registered for the app. Set those permissions in the app registration portal if you need an admin to give consent on behalf of the entire organization.
+
+### Admin consent for delegated permissions
+
+When your application requests [delegated permissions that require admin consent](scopes-oidc.md#admin-restricted-permissions), the user receives an error message that says they're unauthorized to consent to your app's permissions. The user is required to ask their admin for access to the app. If the admin grants consent for the entire tenant, the organization's users don't see a consent page for the application unless the previously granted permissions are revoked or the application requests a new permission incrementally.
+
+Administrators using the same application will see the admin consent prompt. The admin consent prompt provides a checkbox that allows them to grant the application access to the requested data on behalf of the users for the entire tenant. For more information on the user and admin consent experience, see [Application consent experience](application-consent-experience.md).
+
+Examples of delegated permissions for Microsoft Graph that require admin consent are:
+
+- Read all users' full profiles by using User.Read.All
+- Write data to an organization's directory by using Directory.ReadWrite.All
+- Read all groups in an organization's directory by using Groups.Read.All
+
+To view the full list of Microsoft Graph permissions, see [Microsoft Graph permissions reference](/graph/permissions-reference).
+
+You can also configure permissions on your own resources to require admin consent. For more information on how to add scopes that require admin consent, see [Add a scope that requires admin consent](quickstart-configure-app-expose-web-apis.md#add-a-scope-requiring-admin-consent).
+
+Some organizations may change the default user consent policy for the tenant. When your application requests access to permissions, they're evaluated against these policies. The user may need to request admin consent even when it's not required by default. To learn how administrators manage consent policies for applications, see [Manage app consent policies](../manage-apps/manage-app-consent-policies.md).
+
+>[!NOTE]
+>In requests to the authorization, token, or consent endpoints for the Microsoft identity platform, if the resource identifier is omitted in the scope parameter, the resource is assumed to be Microsoft Graph. For example, scope=User.Read is equivalent to `https://graph.microsoft.com/User.Read`.
+
+### Admin consent for application permissions
+
+Application permissions always require admin consent. Application permissions don't have a user context and the consent grant isn't done on behalf of any specific user. Instead, the client application is granted permissions directly. These types of permissions are used only by daemon services and other non-interactive applications that run in the background. Administrators need to configure the permissions upfront and [grant admin consent](../manage-apps/grant-admin-consent.md) through the Azure portal.
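After an admin has granted consent, a daemon app typically redeems its application permissions with the client credentials grant. The following is a hedged sketch with placeholder values (line breaks added for legibility); it uses the `.default` scope described later in this article:

```http
POST /{tenant}/oauth2/v2.0/token HTTP/1.1
Host: login.microsoftonline.com
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials
&client_id={client-id}
&client_secret={client-secret}
&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default
```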
+
+### Admin consent for multi-tenant applications
+
+If the application requesting the permission is a multi-tenant application, its application registration exists only in the tenant where it was created, so permissions can't be configured in the local tenant. If the application requests permissions that require admin consent, the administrator needs to consent on behalf of the users. To consent to these permissions, the administrators need to log in to the application themselves, so the admin consent sign-in experience is triggered. To learn how to set up the admin consent experience for multi-tenant applications, see [Enable multi-tenant log-ins](howto-convert-app-to-be-multi-tenant.md#understand-user-and-admin-consent-and-make-appropriate-code-changes).
+
+An administrator can grant consent for an application with the following options.
+
+### Recommended: Sign the user into your app
+
+Typically, when you build an application that requires admin consent, the application needs a page or view in which the admin can approve the app's permissions. This page can be:
+
+- Part of the app's sign-up flow.
+- Part of the app's settings.
+- A dedicated "connect" flow.
+
+In many cases, it makes sense for the application to show the "connect" view only after a user has signed in with a work Microsoft account or school Microsoft account.
+
+When you sign the user into your app, you can identify the organization to which the admin belongs before you ask them to approve the necessary permissions. Although this step isn't strictly necessary, it can help you create a more intuitive experience for your organizational users.
+
+To sign the user in, follow the [Microsoft identity platform protocol tutorials](active-directory-v2-protocols.md).
+
+### Request the permissions in the app registration portal
+
+In the app registration portal, applications can list the permissions they require, including both delegated permissions and application permissions. This setup allows the use of the `.default` scope and the Azure portal's **Grant admin consent** option.
+
+In general, the permissions should be statically defined for a given application. They should be a superset of the permissions that the application will request dynamically or incrementally.
+
+> [!NOTE]
+>Application permissions can be requested only through the use of [`.default`](scopes-oidc.md#the-default-scope). So if your application needs application permissions, make sure they're listed in the app registration portal.
+
+To configure the list of statically requested permissions for an application:
+
+1. Go to your application in the <a href="https://go.microsoft.com/fwlink/?linkid=2083908" target="_blank">Azure portal - App registrations</a> quickstart experience.
+1. Select an application, or [create an app](quickstart-register-app.md) if you haven't already.
+1. On the application's **Overview** page, under **Manage**, select **API Permissions** > **Add a permission**.
+1. Select **Microsoft Graph** from the list of available APIs. Then add the permissions that your application requires.
+1. Select **Add Permissions**.
+
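The success and error responses shown below are returned to the redirect URI after you send a tenant administrator to the admin consent endpoint. A hedged sketch of that request, with placeholder values and line breaks added for legibility, looks like this:

```http
GET https://login.microsoftonline.com/{tenant}/v2.0/adminconsent?
client_id={client-id}
&state=12345
&redirect_uri={redirect-uri}
&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default
```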
+### Successful response
+
+If the admin approves the permissions for your app, the successful response looks like this:
+
+```HTTP
+GET http://localhost/myapp/permissions?tenant=a8990e1f-ff32-408a-9f8e-78d3b9139b95&state=12345&admin_consent=True
+```
+
+| Parameter | Description |
+| | |
+| `tenant` | The directory tenant that granted your application the permissions it requested, in GUID format. |
+| `state` | A value included in the request that also will be returned in the token response. It can be a string of any content you want. The state is used to encode information about the user's state in the application before the authentication request occurred, such as the page or view they were on. |
+| `admin_consent` | Will be set to `True`. |
+
+After you've received a successful response from the admin consent endpoint, your application has gained the permissions it requested. Next, you can request a token for the resource you want.
+### Error response
+
+If the admin doesn't approve the permissions for your app, the failed response looks like this:
+
+```HTTP
+GET http://localhost/myapp/permissions?error=permission_denied&error_description=The+admin+canceled+the+request
+```
+
+| Parameter | Description |
+| | |
+| `error` | An error code string that can be used to classify types of errors that occur. It can also be used to react to errors. |
+| `error_description` | A specific error message that can help a developer identify the root cause of an error. |
+
+## Using permissions after consent
+
+After the user consents to permissions for your app, your application can acquire access tokens that represent the app's permission to access a resource in some capacity. An access token can be used only for a single resource. But encoded inside the access token is every permission that your application has been granted for that resource. To acquire an access token, your application can make a request to the Microsoft identity platform token endpoint, like this:
+
+```HTTP
+POST /common/oauth2/v2.0/token HTTP/1.1
+Host: login.microsoftonline.com
+Content-Type: application/x-www-form-urlencoded
+
+grant_type=authorization_code
+&client_id=6731de76-14a6-49ae-97bc-6eba6914391e
+&scope=https%3A%2F%2Fgraph.microsoft.com%2FMail.Read%20https%3A%2F%2Fgraph.microsoft.com%2Fmail.send
+&code=AwABAAAAvPM1KaPlrEqdFSBzjqfTGBCmLdgfSTLEMPGYuNHSUYBrq...
+&redirect_uri=https%3A%2F%2Flocalhost%2Fmyapp
+&client_secret=zc53fwe80980293klaj9823   // NOTE: Only required for web apps
+```
+
+You can use the resulting access token in HTTP requests to the resource. It reliably indicates to the resource that your application has the proper permission to do a specific task.
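For example, a request to Microsoft Graph carries the access token in the `Authorization` header. This is a minimal sketch; the token value is truncated, and it assumes the `Mail.Read` permission consented to above:

```http
GET https://graph.microsoft.com/v1.0/me/messages HTTP/1.1
Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIs...
```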
+
+For more information about the OAuth 2.0 protocol and how to get access tokens, see the [Microsoft identity platform endpoint protocol reference](active-directory-v2-protocols.md).
+
+## Next steps
+
+- [Consent experience](application-consent-experience.md)
+- [ID tokens](id-tokens.md)
+- [Access tokens](access-tokens.md)
active-directory Delegated Access Primer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/delegated-access-primer.md
+
+ Title: Microsoft identity platform delegated access scenario
+description: Learn about delegated access in the Microsoft identity platform endpoint.
++++++++ Last updated : 11/01/2022++++
+# Understanding delegated access
+
+When a user signs into an app and uses it to access some other resource, like Microsoft Graph, the app will first need to ask for permission to access this resource on the user's behalf. This common scenario is called delegated access.
+
+> [!VIDEO https://learn-video.azurefd.net/vod/player?show=one-dev-minute&ep=how-do-delegated-permissions-work]
+
+## Why should I use delegated access?
+
+People frequently use different applications to access their data from cloud services. For example, someone might want to use a favorite PDF reader application to view files stored in their OneDrive. Another example is a company's line-of-business application that might retrieve shared information about their coworkers so they can easily choose reviewers for a request. In such cases, the client application, the PDF reader, or the company's request approval tool needs to be authorized to access this data on behalf of the user who signed into the application.
+
+Use delegated access whenever you want to let a signed-in user work with their own resources or resources they can access. Whether it's an admin setting up policies for their entire organization or a user deleting an email in their inbox, all scenarios involving user actions should use delegated access.
+
+In contrast, delegated access is usually a poor choice for scenarios that must run without a signed-in user, like automation. It may also be a poor choice for scenarios that involve accessing many users' resources, like data loss prevention or backups. Consider using [application-only access](permissions-consent-overview.md) for these types of operations.
+
+## Requesting scopes as a client app
+
+Your app will need to ask the user to grant a specific scope, or set of scopes, for the resource app you want to access. Scopes may also be referred to as delegated permissions. These scopes describe which resources and operations your app wants to perform on the user's behalf. For example, if you want your app to show the user a list of recently received mail messages and chat messages, you might ask the user to consent to the Microsoft Graph `Mail.Read` and `Chat.Read` scopes.
+
+Once your app has requested a scope, a user or admin will need to grant the requested access. Consumer users with Microsoft Accounts, like Outlook.com or Xbox Live accounts, can always grant scopes for themselves. Organizational users with Azure AD accounts may or may not be able to grant scopes, depending on their organization's settings. If an organizational user can't consent to scopes directly, they'll need to ask their organization's administrator to consent for them.
+
+Always follow the principle of least privilege: you should never request scopes that your app doesn't need. This principle helps limit the security risk if your app is compromised and makes it easier for administrators to grant your app access. For example, if your app only needs to list the chats a user belongs to but doesn't need to show the chat messages themselves, you should request the more limited Microsoft Graph `Chat.ReadBasic` scope instead of `Chat.Read`. For more information about OpenID Connect scopes, see [OpenID scopes](scopes-oidc.md).
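For example, a desktop client built with MSAL.NET might request only the narrower scope like this (a sketch only; `YOUR_CLIENT_ID` is a placeholder, and MSAL.NET is just one of several client libraries you could use):

```csharp
using System;
using Microsoft.Identity.Client;

// Request only the scopes the app actually needs (least privilege).
// "YOUR_CLIENT_ID" is a placeholder for your app registration's client ID.
var app = PublicClientApplicationBuilder
    .Create("YOUR_CLIENT_ID")
    .WithRedirectUri("http://localhost")
    .Build();

string[] scopes = { "Chat.ReadBasic" }; // narrower than Chat.Read

AuthenticationResult result = await app
    .AcquireTokenInteractive(scopes)
    .ExecuteAsync();

Console.WriteLine($"Granted scopes: {string.Join(" ", result.Scopes)}");
```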
+
+## Designing and publishing scopes for a resource service
+
+If you're building an API and want to allow delegated access on behalf of users, you'll need to create scopes that other apps can request. These scopes should describe the actions or resources available to the client. You should consider developer scenarios when designing your scopes.
++
+## How does delegated access work?
+
+The most important thing to remember about delegated access is that both your client app and the signed-in user need to be properly authorized. Granting a scope isn't enough. If either the client app doesn't have the right scope, or the user doesn't have sufficient rights to read or modify the resource, then the call will fail.
+
+- **Client app authorization** - Client apps are authorized by granting scopes. When a client app is granted a scope by a user or admin to access some resource, that grant is recorded in Azure AD. All delegated access tokens that the client requests to access the resource on behalf of the relevant user will then contain those scopes in the `scp` claim. The resource app checks this claim to determine whether the client app has been granted the correct scope for the call (see the sketch after this list).
+- **User authorization** - Users are authorized by the resource you're calling. Resource apps may use one or more systems for user authorization, such as [role-based access control](custom-rbac-for-developers.md), ownership/membership relationships, access control lists, or other checks. For example, Azure AD checks that a user has been assigned to an app management or general admin role before allowing them to delete an organization's applications, but also allows all users to delete applications that they own. Similarly, SharePoint Online service checks that a user has appropriate owner or reader rights over a file before allowing that user to open it.
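The following rough sketch shows how a resource API might implement both checks. It assumes an ASP.NET Core web API where token validation middleware has already produced a `ClaimsPrincipal`, and it uses a hypothetical drive-ownership lookup; it's an illustration of the pattern, not how OneDrive itself is implemented.

```csharp
using System.Linq;
using System.Security.Claims;

// Sketch of the two authorization checks a resource API performs.
// Assumes token validation middleware has already produced a ClaimsPrincipal.
public static class FileAccessAuthorization
{
    public static bool IsCallAuthorized(ClaimsPrincipal user, string driveOwnerId)
    {
        // 1. Client app authorization: was the client granted the required scope?
        //    Delegated scopes arrive as a space-separated list in the `scp` claim.
        string? scopes = user.FindFirst("scp")?.Value;
        bool clientHasScope = scopes?.Split(' ').Contains("Files.Read") == true;

        // 2. User authorization: does the signed-in user own the drive being read?
        //    The `oid` claim is the object ID of the signed-in user.
        string? userId = user.FindFirst("oid")?.Value;
        bool userOwnsDrive = userId is not null && userId == driveOwnerId;

        return clientHasScope && userOwnsDrive;
    }
}
```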
+
+## Delegated access example - OneDrive via Microsoft Graph
+
+Consider the following example:
+
+Alice wants to use a client app to open a file protected by a resource API, Microsoft Graph. For user authorization, the OneDrive service will check whether the file is stored in Alice's drive. If it's stored in another user's drive, then OneDrive will deny Alice's request as unauthorized, since Alice doesn't have the right to read other users' drives.
+
+For client app authorization, OneDrive will check whether the client making the call has been granted the `Files.Read` scope on behalf of the signed-in user. In this case, the signed-in user is Alice. If `Files.Read` hasn't been granted to the app for Alice, then OneDrive will also fail the request.
+
+| GET /drives/{id}/files/{id} | Client app granted `Files.Read` scope for Alice | Client app not granted `Files.Read` scope for Alice |
+| -- | -- | -- |
+| The document is in Alice's OneDrive. | 200 - Access granted. | 403 - Forbidden. Alice (or her admin) hasn't allowed this client to read her files. |
+| The document is in another user's OneDrive*. | 403 - Forbidden. Alice doesn't have rights to read this file. Even though the client has been granted `Files.Read`, it should be denied when acting on Alice's behalf. | 403 - Forbidden. Alice doesn't have rights to read this file, and the client isn't allowed to read files she has access to either. |
+
+The example given is simplified to illustrate delegated authorization. The production OneDrive service supports many other access scenarios, such as shared files.
+
+## Next steps
+
+- [OpenID Connect scopes](scopes-oidc.md)
+- [RBAC roles](custom-rbac-for-developers.md)
+- [Microsoft Graph permissions reference](/graph/permissions-reference)
active-directory Howto Add App Roles In Azure Ad Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md
Previously updated : 06/13/2022 Last updated : 09/27/2022
Role-based access control (RBAC) is a popular mechanism to enforce authorization
By using RBAC with application role and role claims, developers can securely enforce authorization in their apps with less effort.
-Another approach is to use Azure Active Directory (Azure AD) groups and group claims as shown in the [active-directory-aspnetcore-webapp-openidconnect-v2](https://aka.ms/groupssample) code sample on GitHub. Azure AD groups and application roles aren't mutually exclusive; they can be used in tandem to provide even finer-grained access control.
+Another approach is to use Azure Active Directory (Azure AD) groups and group claims as shown in the [active-directory-aspnetcore-webapp-openidconnect-v2](https://aka.ms/groupssample) code sample on GitHub. Azure AD groups and application roles aren't mutually exclusive; they can be used together to provide even finer-grained access control.
## Declare roles for an application
-You define app roles by using the [Azure portal](https://portal.azure.com) during the [app registration process](quickstart-register-app.md). App roles are defined on an application registration representing a service, app or API. When a user signs in to the application, Azure AD emits a `roles` claim for each role that the user or service principal has been granted individually to the user and the user's group memberships. This can be used to implement claim-based authorization. App roles can be assigned [to a user or a group of users](../manage-apps/add-application-portal-assign-users.md). App roles can also be assigned to the service principal for another application, or [to the service principal for a managed identity](../managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md).
+You define app roles by using the [Azure portal](https://portal.azure.com) during the [app registration process](quickstart-register-app.md). App roles are defined on an application registration representing a service, app or API. When a user signs in to the application, Azure AD emits a `roles` claim for each role that the user or service principal has been granted. This can be used to implement claim-based authorization. App roles can be assigned [to a user or a group of users](../manage-apps/add-application-portal-assign-users.md). App roles can also be assigned to the service principal for another application, or [to the service principal for a managed identity](../managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md).
Currently, if you add a service principal to a group, and then assign an app role to that group, Azure AD doesn't add the `roles` claim to tokens it issues.
-App roles are declared using the app roles by using [App roles UI](#app-roles-ui) in the Azure portal:
+App roles are declared by using the **App roles** UI in the Azure portal:
The number of roles you add counts toward application manifest limits enforced by Azure AD. For information about these limits, see the [Manifest limits](./reference-app-manifest.md#manifest-limits) section of [Azure Active Directory app manifest reference](reference-app-manifest.md).
The **Status** column should reflect that consent has been **Granted for \<tenan
If you're implementing app role business logic that signs in the users in your application scenario, first define the app roles in **App registrations**. Then, an admin assigns them to users and groups in the **Enterprise applications** pane. These assigned app roles are included with any token that's issued for your application, either access tokens when your app is the API being called by an app or ID tokens when your app is signing in a user.
-If you're implementing app role business logic in an app-calling-API scenario, you have two app registrations. One app registration is for the app, and a second app registration is for the API. In this case, define the app roles and assign them to the user or group in the app registration of the API. When the user authenticates with the app and requests an access token to call the API, a roles claim is included in the access token. Your next step is to add code to your web API to check for those roles when the API is called.
+If you're implementing app role business logic in an app-calling-API scenario, you have two app registrations. One app registration is for the app, and a second app registration is for the API. In this case, define the app roles and assign them to the user or group in the app registration of the API. When the user authenticates with the app and requests an access token to call the API, a roles claim is included in the access token. Your next step is to add code to your web API to check for those roles when the API is called.
To learn how to add authorization to your web API, see [Protected web API: Verify scopes and app roles](scenario-protected-web-api-verification-scope-app-roles.md).
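As a hedged example of that last step, a web API built with ASP.NET Core might verify the `roles` claim like this. `Orders.Read` is a hypothetical app role defined on the API's registration, and the snippet assumes the standard JWT bearer middleware (for example, via Microsoft.Identity.Web) has already validated the access token and mapped the `roles` claim to role claims:

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[Authorize]
[ApiController]
[Route("api/[controller]")]
public class OrdersController : ControllerBase
{
    [HttpGet]
    public IActionResult GetOrders()
    {
        // The roles claim in the access token carries the app roles assigned to the
        // signed-in user (or the calling app). "Orders.Read" is a hypothetical role
        // defined on the API's app registration.
        if (!User.IsInRole("Orders.Read"))
        {
            return Forbid();
        }

        return Ok(new[] { "order1", "order2" });
    }
}
```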
active-directory Msal Logging Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-dotnet.md
In MSAL, logging is set at application creation using the `.WithLogging` builder modifier. This method takes optional parameters:
+- `IIdentityLogger` is the logging implementation used by MSAL.NET to produce logs for debugging or health check purposes. Logs are only sent if logging is enabled.
- `Level` enables you to decide which level of logging you want. Setting it to Errors will only get errors.
-- `PiiLoggingEnabled` enables you to log personal and organizational data (PII) if set to true. By default, this is set to false, so that your application doesn't log personal data.
+- `PiiLoggingEnabled` enables you to log personal and organizational data (PII) if set to true. By default, this parameter is set to false, so that your application doesn't log personal data.
- `LogCallback` is set to a delegate that does the logging. If `PiiLoggingEnabled` is true, this method will receive messages that can have PII, in which case the `containsPii` flag will be set to true.
- `DefaultLoggingEnabled` enables the default logging for the platform. By default it's false. If you set it to true, it uses Event Tracing in Desktop/UWP applications, NSLog on iOS, and logcat on Android.
-```csharp
-class Program
+### IIdentityLogger Interface
+```CSharp
+namespace Microsoft.IdentityModel.Abstractions
{
- private static void Log(LogLevel level, string message, bool containsPii)
- {
- if (containsPii)
- {
- Console.ForegroundColor = ConsoleColor.Red;
- }
- Console.WriteLine($"{level} {message}");
- Console.ResetColor();
- }
-
- static void Main(string[] args)
- {
- var scopes = new string[] { "User.Read" };
-
- var application = PublicClientApplicationBuilder.Create("<clientID>")
- .WithLogging(Log, LogLevel.Info, true)
- .Build();
-
- AuthenticationResult result = application.AcquireTokenInteractive(scopes)
- .ExecuteAsync().Result;
- }
+ public interface IIdentityLogger
+ {
+ //
+ // Summary:
+ // Checks to see if logging is enabled at given eventLogLevel.
+ //
+ // Parameters:
+ // eventLogLevel:
+ // Log level of a message.
+ bool IsEnabled(EventLogLevel eventLogLevel);
+
+ //
+ // Summary:
+ // Writes a log entry.
+ //
+ // Parameters:
+ // entry:
+ // Defines a structured message to be logged at the provided Microsoft.IdentityModel.Abstractions.LogEntry.EventLogLevel.
+ void Log(LogEntry entry);
+ }
} ```
+> [!NOTE]
+> Partner libraries (`Microsoft.Identity.Web`, `Microsoft.IdentityModel`) already provide implementations of this interface for various environments (in particular, ASP.NET Core).
+
+### IIdentityLogger Implementation
+
+The following code snippets are examples of such an implementation. If you use the .NET Core configuration, environment-variable-driven log levels are available in addition to configuration-file-based log levels.
+
+#### Log level from configuration file
+
+It's highly recommended to set the log level from a configuration file in your environment, because it lets you change the MSAL logging level without rebuilding or restarting the application. This is critical for diagnostic purposes, enabling you to quickly gather the required logs from the application that is currently deployed and in production. Verbose logging can be costly, so it's best to use the *Information* level by default and enable verbose logging only when an issue is encountered. [See JSON configuration provider](https://docs.microsoft.com/aspnet/core/fundamentals/configuration#json-configuration-provider) for an example of how to load data from a configuration file without restarting the application.
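The following sketch shows one way to do this with the `Microsoft.Extensions.Configuration` JSON provider. The `MsalLogLevel` key is a hypothetical setting name chosen for this example, not one defined by MSAL:

```csharp
using System;
using Microsoft.Extensions.Configuration;
using Microsoft.IdentityModel.Abstractions;

// Build configuration from appsettings.json; reloadOnChange lets the log
// level be adjusted without restarting the application.
IConfigurationRoot configuration = new ConfigurationBuilder()
    .SetBasePath(AppContext.BaseDirectory)
    .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
    .Build();

// "MsalLogLevel" is a hypothetical key, for example: { "MsalLogLevel": "Informational" }
EventLogLevel minLogLevel = Enum.TryParse(configuration["MsalLogLevel"], out EventLogLevel configured)
    ? configured
    : EventLogLevel.Informational; // recommended default
```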
+
+#### Log Level as Environment Variable
+
+Another recommended option is to set the log level from an environment variable on the machine, which also lets you change the MSAL logging level without rebuilding the application. This is critical for diagnostic purposes, enabling you to quickly gather the required logs from the application that is currently deployed and in production.
+
+See [EventLogLevel](https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet/blob/dev/src/Microsoft.IdentityModel.Abstractions/EventLogLevel.cs) for details on the available log levels.
+
+Example:
+
+```CSharp
+ class MyIdentityLogger : IIdentityLogger
+ {
+ public EventLogLevel MinLogLevel { get; }
+
+ public MyIdentityLogger()
+ {
+ //Try to pull the log level from an environment variable
+ var msalEnvLogLevel = Environment.GetEnvironmentVariable("MSAL_LOG_LEVEL");
+
+ if (Enum.TryParse(msalEnvLogLevel, out EventLogLevel msalLogLevel))
+ {
+ MinLogLevel = msalLogLevel;
+ }
+ else
+ {
+ //Recommended default log level
+ MinLogLevel = EventLogLevel.Informational;
+ }
+ }
+
+ public bool IsEnabled(EventLogLevel eventLogLevel)
+ {
+ return eventLogLevel <= MinLogLevel;
+ }
+
+ public void Log(LogEntry entry)
+ {
+ //Log Message here:
+ Console.WriteLine(entry.Message);
+ }
+ }
+```
+
+Using `MyIdentityLogger`:
+```CSharp
+ MyIdentityLogger myLogger = new MyIdentityLogger();
+
+ var app = ConfidentialClientApplicationBuilder
+ .Create(TestConstants.ClientId)
+ .WithClientSecret("secret")
+ .WithExperimentalFeatures() //Currently an experimental feature, will be removed soon
+ .WithLogging(myLogger, piiLogging)
+ .Build();
+```
+ > [!TIP] > See the [MSAL.NET wiki](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki) for samples of MSAL.NET logging and more.
active-directory Permissions Consent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/permissions-consent-overview.md
Title: Overview of permissions and consent in the Microsoft identity platform
-description: Learn about the foundational concepts and scenarios around consent and permissions in the Microsoft identity platform
+description: Learn the foundational concepts and scenarios around consent and permissions in the Microsoft identity platform
Previously updated : 05/10/2022 Last updated : 11/01/2022 + #Customer intent: As a developer or admin in the Microsoft identity platform, I want to understand the basic concepts of managing how applications access resources through the permissions and consent framework. # Introduction to permissions and consent
To _access_ a protected resource like email or calendar data, your application n
## Access scenarios
-As an application developer, you must identify how your application will access data. The application can use delegated access, acting on behalf of a signed-in user, or direct access, acting only as the application's own identity.
+As an application developer, you must identify how your application will access data. The application can use delegated access, acting on behalf of a signed-in user, or app-only access, acting only as the application's own identity.
![Image shows illustration of access scenarios.](./media/permissions-consent-overview/access-scenarios.png) ### Delegated access (access on behalf of a user)
-In this access scenario, a user has signed into a client application. The client application accesses the resource on behalf of the user. Delegated access requires delegated permissions. Both the client and the user must be authorized separately to make the request.
+In this access scenario, a user has signed into a client application. The client application accesses the resource on behalf of the user. Delegated access requires delegated permissions. Both the client and the user must be authorized separately to make the request. For more information about the delegated access scenario, see [delegated access scenario](delegated-access-primer.md).
-For the client app, the correct delegated permissions must be granted. Delegated permissions can also be referred to as scopes. Scopes are permissions of a given resource that the client application exercises on behalf of a user. They're strings that represent what the application wants to do on behalf of the user. For more information about scopes, see [scopes and permissions](v2-permissions-and-consent.md#scopes-and-permissions).
+For the client app, the correct delegated permissions must be granted. Delegated permissions can also be referred to as scopes. Scopes are permissions for a given resource that represent what a client application can access on behalf of the user. For more information about scopes, see [scopes and permissions](v2-permissions-and-consent.md#scopes-and-permissions).
-For the user, the authorization relies on the privileges that the user has been granted for them to access the resource. For example, the user could be authorized to access directory resources by [Azure Active Directory (Azure AD) role-based access control (RBAC)](../roles/custom-overview.md) or to access mail and calendar resources by [Exchange Online RBAC](/exchange/permissions-exo/permissions-exo).
+For the user, the authorization relies on the privileges that the user has been granted for them to access the resource. For example, the user could be authorized to access directory resources by [Azure Active Directory (Azure AD) role-based access control (RBAC)](../roles/custom-overview.md) or to access mail and calendar resources by Exchange Online RBAC. For more information on RBAC for applications, see [RBAC for applications](custom-rbac-for-developers.md).
-### Direct access (App-only access)
+### App-only access (Access without a user)
In this access scenario, the application acts on its own with no user signed in. Application access is used in scenarios such as automation, and backup. This scenario includes apps that run as background services or daemons. It's appropriate when it's undesirable to have a specific user signed in, or when the data required can't be scoped to a single user.
-Direct access may require application permissions but this isn't the only way for granting an application direct access. Application permissions can be referred to as app roles. When app roles are granted to other applications, they can be called applications permissions. The appropriate application permissions or app roles must be granted to the application for it to access the resource. For more information about assigning app roles to applications, see [App roles for applications](howto-add-app-roles-in-azure-ad-apps.md).
+App-only access uses app roles instead of delegated scopes. When granted through consent, app roles may also be called application permissions. For app-only access, the client app must be granted the appropriate app roles of the resource app it's calling in order to access the requested data. For more information about assigning app roles to client applications, see [Assigning app roles to applications](howto-add-app-roles-in-azure-ad-apps.md#assign-app-roles-to-applications).
## Types of permissions
-**Delegated permissions** are used in the delegated access scenario. They're permissions that allow the application to act on a user's behalf. The application will never be able to access anything users themselves couldn't access.
+**Delegated permissions** are used in the delegated access scenario. They're permissions that allow the application to act on a user's behalf. The application will never be able to access anything the signed-in user themselves couldn't access.
For example, imagine an application that has been granted the Files.Read.All delegated permission on behalf of Tom, the user. The application will only be able to read files that Tom can personally access.
-**Application permissions** are used in the direct access scenario, without a signed-in user present. The application will be able to access any data that the permission is associated with. For example, an application granted the Files.Read.All application permission will be able to read any file in the tenant. Only an administrator or owner of the service principal can consent to application permissions.
+**Application permissions**, sometimes called app roles, are used in the app-only access scenario, without a signed-in user present. The application will be able to access any data that the permission is associated with. For example, an application granted the Files.Read.All application permission will be able to read any file in the tenant. Only an administrator or owner of the service principal can consent to application permissions.
+
+There are other ways in which applications can be granted authorization for app-only access. For example, an application can be assigned an Azure AD RBAC role.
-There are other ways in which applications can be granted authorization for direct access. For example, an application can be assigned an Azure AD RBAC role.
+### Comparison of delegated and application permissions
+
+| <!-- No header--> | Delegated permissions | Application permissions |
+|--|--|--|
+| Types of apps | Web / Mobile / single-page app (SPA) | Web / Daemon |
+| Access context | Get access on behalf of a user | Get access without a user |
+| Who can consent | - Users can consent for their data <br> - Admins can consent for all users | Only admins can consent |
+| Other names | - Scopes <br> - OAuth2 permission scopes | - App roles <br> - App-only permissions |
+| Result of consent (specific to Microsoft Graph) | [oAuth2PermissionGrant](/graph/api/resources/oauth2permissiongrant) | [appRoleAssignment](/graph/api/resources/approleassignment) |
## Consent
-One way that applications are granted permissions is through consent. Consent is a process where users or admins authorize an application to access a protected resource. For example, when a user attempts to sign into an application for the first time, the application can request permission to see the user's profile and read the contents of the user's mailbox. The user sees the list of permissions the app is requesting through a consent prompt.
+One way that applications are granted permissions is through consent. Consent is a process where users or admins authorize an application to access a protected resource. For example, when a user attempts to sign into an application for the first time, the application can request permission to see the user's profile and read the contents of the user's mailbox. The user sees the list of permissions the app is requesting through a consent prompt. Other scenarios where users may see a consent prompt include:
+
+- When previously granted consent is revoked.
+- When the application is coded to specifically prompt for consent during every sign-in.
+- When the application uses incremental or dynamic consent to ask for some permissions upfront and more permissions later as needed.
The key details of a consent prompt are the list of permissions the application requires and the publisher information. For more information about the consent prompt and the consent experience for both admins and end-users, see [application consent experience](application-consent-experience.md). ### User consent
-User consent happens when a user attempts to sign into an application. The user provides their sign-in credentials. These credentials are checked to determine whether consent has already been granted. If no previous record of user or admin consent for the required permissions exists, the user is shown a consent prompt and asked to grant the application the requested permissions. In many cases, an admin may be required to grant consent on behalf of the user.
+User consent happens when a user attempts to sign into an application. The user provides their sign-in credentials. These credentials are checked to determine whether consent has already been granted. If no previous record of user or admin consent for the required permissions exists, the user is shown a consent prompt, and asked to grant the application the requested permissions. In many cases, an admin may be required to grant consent on behalf of the user.
### Administrator consent
-Depending on the permissions they require, some applications might require an administrator to be the one who grants consent. For example, application permissions can only be consented to by an administrator. Administrators can grant consent for themselves or for the entire organization. For more information about user and admin consent, see [user and admin consent overview](../manage-apps/consent-and-permissions-overview.md)
+Depending on the permissions they require, some applications might require an administrator to be the one who grants consent. For example, application permissions and many high-privilege delegated permissions can only be consented to by an administrator. Administrators can grant consent for themselves or for the entire organization. For more information about user and admin consent, see [user and admin consent overview](../manage-apps/consent-and-permissions-overview.md).
### Preauthorization Preauthorization allows a resource application owner to grant permissions without requiring users to see a consent prompt for the same set of permissions that have been preauthorized. This way, an application that has been preauthorized won't ask users to consent to permissions. Resource owners can preauthorize client apps in the Azure portal or by using PowerShell and APIs, like Microsoft Graph. ## Next steps
+- [Delegated access scenario](delegated-access-primer.md)
- [User and admin consent overview](../manage-apps/consent-and-permissions-overview.md)-- [Scopes and permissions](v2-permissions-and-consent.md)
+- [OpenID connect scopes](scopes-oidc.md)
active-directory Scenario Spa Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-acquire-token.md
# Single-page application: Acquire a token to call an API
-The pattern for acquiring tokens for APIs with [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js) is to first attempt a silent token request by using the `acquireTokenSilent` method. When this method is called, the library first checks the cache in browser storage to see if a valid token exists and returns it. When no valid token is in the cache, it attempts to use its refresh token to get the token. If the refresh token's 24-hour lifetime has expired, MSAL.js will open a hidden iframe to silently request a new authorization code, which it will exchange for a new, valid refresh token. For more information about single sign-on (SSO) session and token lifetime values in Azure Active Directory (Azure AD), see [Token lifetimes](active-directory-configurable-token-lifetimes.md).
+The pattern for acquiring tokens for APIs with [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js) is to first attempt a silent token request by using the `acquireTokenSilent` method. When this method is called, the library first checks the cache in browser storage to see if a non-expired access token exists and returns it. If no access token is found for the given parameters, it will throw an `InteractionRequiredAuthError`, which should be handled with an interactive token request method (`acquireTokenPopup` or `acquireTokenRedirect`). If an access token is found but it's expired, it attempts to use its refresh token to get a fresh access token. If the refresh token's 24-hour lifetime has also expired, MSAL.js will open a hidden iframe to silently request a new authorization code by leveraging the existing active session with Azure AD (if any), which will then be exchanged for a fresh set of tokens (access _and_ refresh tokens). For more information about single sign-on (SSO) session and token lifetime values in Azure AD, see [Token lifetimes](active-directory-configurable-token-lifetimes.md). For more information on MSAL.js cache lookup policy, see: [Acquiring an Access Token](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/acquire-token.md#acquiring-an-access-token).
The silent token requests to Azure AD might fail for reasons like a password change or updated conditional access policies. More often, failures are due to the refresh token's 24-hour lifetime expiring and [the browser blocking third party cookies](reference-third-party-cookies-spas.md), which prevents the use of hidden iframes to continue authenticating the user. In these cases, you should invoke one of the interactive methods (which may prompt the user) to acquire tokens:
active-directory Scopes Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scopes-oidc.md
+
+ Title: Microsoft identity platform scopes and permissions
+description: Learn about openID connect scopes and permissions in the Microsoft identity platform endpoint.
+ Last updated : 11/01/2022
+# Scopes and permissions in the Microsoft identity platform
+
+The Microsoft identity platform implements the [OAuth 2.0](active-directory-v2-protocols.md) authorization protocol. OAuth 2.0 is a method through which a third-party app can access web-hosted resources on behalf of a user. Any web-hosted resource that integrates with the Microsoft identity platform has a resource identifier, or *application ID URI*.
+
+In this article, you'll learn about scopes and permissions in the identity platform.
+
+The following list shows some examples of Microsoft web-hosted resources:
+
+- Microsoft Graph: `https://graph.microsoft.com`
+- Microsoft 365 Mail API: `https://outlook.office.com`
+- Azure Key Vault: `https://vault.azure.net`
+
+The same is true for any third-party resources that have integrated with the Microsoft identity platform. Any of these resources also can define a set of permissions that can be used to divide the functionality of that resource into smaller chunks. As an example, [Microsoft Graph](https://graph.microsoft.com) has defined permissions to do the following tasks, among others:
+
+- Read a user's calendar
+- Write to a user's calendar
+- Send mail as a user
+
+Because of these types of permission definitions, the resource has fine-grained control over its data and how API functionality is exposed. A third-party app can request these permissions from users and administrators, who must approve the request before the app can access data or act on a user's behalf.
+
+When a resource's functionality is chunked into small permission sets, third-party apps can be built to request only the permissions that they need to perform their function. Users and administrators can know what data the app can access. And they can be more confident that the app isn't behaving with malicious intent. Developers should always abide by the principle of least privilege, asking for only the permissions they need for their applications to function.
+
+In OAuth 2.0, these types of permission sets are called *scopes*. They're also often referred to as *permissions*. In the Microsoft identity platform, a permission is represented as a string value. An app requests the permissions it needs by specifying the permission in the `scope` query parameter. The identity platform supports several well-defined [OpenID Connect scopes](#openid-connect-scopes) and resource-based permissions (each permission is indicated by appending the permission value to the resource's identifier or application ID URI). For example, the permission string `https://graph.microsoft.com/Calendars.Read` is used to request permission to read users' calendars in Microsoft Graph.
+
+In requests to the Microsoft identity platform authorization server, if the resource identifier is omitted in the scope parameter, the resource is assumed to be Microsoft Graph. For example, `scope=User.Read` is equivalent to `https://graph.microsoft.com/User.Read`.
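For example, the following two scope values are interchangeable when requesting a token (a trivial C# illustration added for clarity, not taken from the article):

```csharp
// Both scope strings request the same Microsoft Graph permission:
// when no resource identifier is given, Microsoft Graph is assumed.
string[] shortForm = { "User.Read" };
string[] fullForm  = { "https://graph.microsoft.com/User.Read" };
```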
+
+## Admin-restricted permissions
+
+Permissions in the Microsoft identity platform can be set to admin-restricted. For example, many higher-privilege Microsoft Graph permissions require admin approval. If your app requires admin-restricted permissions, an organization's administrator must consent to those scopes on behalf of the organization's users. The following list gives examples of these kinds of permissions:
+
+- Read all users' full profiles by using `User.Read.All`
+- Write data to an organization's directory by using `Directory.ReadWrite.All`
+- Read all groups in an organization's directory by using `Group.Read.All`
+
+> [!NOTE]
+>In requests to the authorization, token, or consent endpoints for the Microsoft identity platform, if the resource identifier is omitted in the scope parameter, the resource is assumed to be Microsoft Graph. For example, `scope=User.Read` is equivalent to `https://graph.microsoft.com/User.Read`.
+
+Although a consumer user might grant an application access to this kind of data, organizational users can't grant access to the same set of sensitive company data. If your application requests access to one of these permissions from an organizational user, the user receives an error message that says they're not authorized to consent to your app's permissions.
+
+If the application requests application permissions and an administrator grants these permissions, this grant isn't done on behalf of any specific user. Instead, the client application is granted permissions *directly*. These types of permissions should only be used by daemon services and other non-interactive applications that run in the background. For more information on the app-only access scenario, see [Access scenarios in the Microsoft identity platform](permissions-consent-overview.md).
+
+For a step-by-step guide on how to expose scopes in a web API, see [Configure an application to expose a web API](quickstart-configure-app-expose-web-apis.md).
+
+## OpenID Connect scopes
+
+The Microsoft identity platform implementation of OpenID Connect has a few well-defined scopes that are also hosted on Microsoft Graph: `openid`, `email`, `profile`, and `offline_access`. The `address` and `phone` OpenID Connect scopes aren't supported.
+
+If you request the OpenID Connect scopes and a token, you'll get a token to call the [UserInfo endpoint](userinfo.md).
+
+### openid
+
+If an app signs in by using [OpenID Connect](active-directory-v2-protocols.md), it must request the `openid` scope. The `openid` scope appears on the work account consent page as the **Sign you in** permission.
+
+By using this permission, an app can receive a unique identifier for the user in the form of the `sub` claim. The permission also gives the app access to the UserInfo endpoint. The `openid` scope can be used at the Microsoft identity platform token endpoint to acquire ID tokens. The app can use these tokens for authentication.
+
+### email
+
+The `email` scope can be used with the `openid` scope and any other scopes. It gives the app access to the user's primary email address in the form of the `email` claim.
+
+The `email` claim is included in a token only if an email address is associated with the user account, which isn't always the case. If your app uses the `email` scope, the app needs to be able to handle a case in which no `email` claim exists in the token.
+
+### profile
+
+The `profile` scope can be used with the `openid` scope and any other scope. It gives the app access to a large amount of information about the user. The information it can access includes, but isn't limited to, the user's given name, surname, preferred username, and object ID.
+
+For a complete list of the `profile` claims available in the `id_tokens` parameter for a specific user, see the [`id_tokens` reference](id-tokens.md).
+
+### offline_access
+
+The [`offline_access` scope](https://openid.net/specs/openid-connect-core-1_0.html#OfflineAccess) gives your app access to resources on behalf of the user for an extended time. On the consent page, this scope appears as the **Maintain access to data you have given it access to** permission.
+
+When a user approves the `offline_access` scope, your app can receive refresh tokens from the Microsoft identity platform token endpoint. Refresh tokens are long-lived. Your app can get new access tokens as older ones expire.
+
+> [!NOTE]
+> This permission currently appears on all consent pages, even for flows that don't provide a refresh token (such as the [implicit flow](v2-oauth2-implicit-grant-flow.md)). This setup addresses scenarios where a client can begin within the implicit flow and then move to the code flow where a refresh token is expected.
+
+On the Microsoft identity platform (requests made to the v2.0 endpoint), your app must explicitly request the `offline_access` scope to receive refresh tokens. If you don't request it, then when you redeem an authorization code in the [OAuth 2.0 authorization code flow](active-directory-v2-protocols.md), you'll receive only an access token from the `/token` endpoint.
+
+The access token is valid for a short time. It usually expires in one hour. At that point, your app needs to redirect the user back to the `/authorize` endpoint to get a new authorization code. During this redirect, depending on the type of app, the user might need to enter their credentials again or consent again to permissions.
+
+For more information about how to get and use refresh tokens, see the [Microsoft identity platform protocol reference](active-directory-v2-protocols.md).
+
+## The .default scope
+
+The `.default` scope is used to refer generically to a resource service (API) in a request, without identifying specific permissions. If consent is necessary, using `.default` signals that consent should be prompted for all required permissions listed in the application registration (for all APIs in the list).
+
+The scope parameter value is constructed by using the identifier URI for the resource and `.default`, separated by a forward slash (`/`). For example, if the resource's identifier URI is `https://contoso.com`, the scope to request is `https://contoso.com/.default`. For cases where you must include a second slash to correctly request the token, see the [section about trailing slashes](#trailing-slash-and-default).
+
+Using `scope={resource-identifier}/.default` is functionally the same as `resource={resource-identifier}` on the v1.0 endpoint (where `{resource-identifier}` is the identifier URI for the API, for example `https://graph.microsoft.com` for Microsoft Graph).
+
+The `.default` scope can be used in any OAuth 2.0 flow and to initiate [admin consent](v2-admin-consent.md). Its use is required in the [On-Behalf-Of flow](v2-oauth2-on-behalf-of-flow.md) and [client credentials flow](v2-oauth2-client-creds-grant-flow.md).
+
+Clients can't combine static (`.default`) consent and dynamic consent in a single request. So `scope=https://graph.microsoft.com/.default Mail.Read` results in an error because it combines scope types.
+
+### .default when the user has already given consent
+
+The `.default` scope parameter only triggers a consent prompt if consent hasn't been granted for any delegated permission between the client and the resource, on behalf of the signed-in user.
+
+If consent exists, the returned token contains all scopes granted for that resource for the signed-in user. However, if no permission has been granted for the requested resource (or if the `prompt=consent` parameter has been provided), a consent prompt is shown for all required permissions configured on the client application registration, for all APIs in the list.
+
+For example, if the scope `https://graph.microsoft.com/.default` is requested, your application is requesting an access token for the Microsoft Graph API. If at least one delegated permission has been granted for Microsoft Graph on behalf of the signed-in user, the sign-in will continue and all Microsoft Graph delegated permissions that have been granted for that user will be included in the access token. If no permissions have been granted for the requested resource (Microsoft Graph, in this example), then a consent prompt will be presented for all required permissions configured on the application, for all APIs in the list.
+
+#### Example 1: The user, or tenant admin, has granted permissions
+
+In this example, the user or a tenant administrator has granted the `Mail.Read` and `User.Read` Microsoft Graph permissions to the client.
+
+If the client requests `scope=https://graph.microsoft.com/.default`, no consent prompt is shown, regardless of the contents of the client application's registered permissions for Microsoft Graph. The returned token contains the scopes `Mail.Read` and `User.Read`.
+
+#### Example 2: The user hasn't granted permissions between the client and the resource
+
+In this example, the user hasn't granted consent between the client and Microsoft Graph, nor has an administrator. The client has registered for the permissions `User.Read` and `Contacts.Read`. It has also registered for the Azure Key Vault scope `https://vault.azure.net/user_impersonation`.
+
+When the client requests a token for `scope=https://graph.microsoft.com/.default`, the user sees a consent page for the Microsoft Graph `User.Read` and `Contacts.Read` scopes, and for the Azure Key Vault `user_impersonation` scope. The returned token contains only the `User.Read` and `Contacts.Read` scopes, and it can be used only against Microsoft Graph.
+
+#### Example 3: The user has consented, and the client requests more scopes
+
+In this example, the user has already consented to `Mail.Read` for the client. The client has registered for the `Contacts.Read` scope.
+
+The client first performs a sign-in with `scope=https://graph.microsoft.com/.default`. Based on the `scopes` parameter of the response, the application's code detects that only `Mail.Read` has been granted. The client then initiates a second sign-in using `scope=https://graph.microsoft.com/.default`, and this time forces consent using `prompt=consent`. If the user is allowed to consent for all the permissions that the application registered, they'll be shown the consent prompt. (If not, they'll be shown an error message or the [admin consent request](../manage-apps/configure-admin-consent-workflow.md) form.) Both `Contacts.Read` and `Mail.Read` will be in the consent prompt. If consent is granted and the sign-in continues, the token returned is for Microsoft Graph, and contains `Mail.Read` and `Contacts.Read`.
+
+### Using the .default scope with the client
+
+In some cases, a client can request its own `.default` scope. The following example demonstrates this scenario.
+
+The scenario accommodates some legacy clients that are moving from Azure AD Authentication Library (ADAL) to the Microsoft Authentication Library (MSAL). This setup *shouldn't* be used by new clients that target the Microsoft identity platform.
++
+```http
+// Line breaks are for legibility only.
+
+GET https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize
+ ?response_type=token //Code or a hybrid flow is also possible here
+ &client_id=9ada6f8a-6d83-41bc-b169-a306c21527a5
+ &scope=9ada6f8a-6d83-41bc-b169-a306c21527a5/.default
+ &redirect_uri=https%3A%2F%2Flocalhost
+ &state=1234
+```
+
+This code example produces a consent page for all registered permissions if the preceding descriptions of consent and `.default` apply to the scenario. Then the code returns an `id_token`, rather than an access token.
+
+### Client credentials grant flow and .default
+
+Another use of `.default` is to request app roles (also known as application permissions) in a non-interactive application like a daemon app that uses the [client credentials](v2-oauth2-client-creds-grant-flow.md) grant flow to call a web API.
+
+To define app roles (application permissions) for a web API, see [Add app roles in your application](howto-add-app-roles-in-azure-ad-apps.md).
+
+Client credentials requests in your client service *must* include `scope={resource}/.default`. Here, `{resource}` is the web API that your app intends to call, and wishes to obtain an access token for. Issuing a client credentials request by using individual application permissions (roles) is *not* supported. All the app roles (application permissions) that have been granted for that web API are included in the returned access token.
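As an illustration (assumptions: MSAL.NET as the client library, Microsoft Graph as the target resource, and placeholder client ID, tenant ID, and secret), a daemon app would request the `.default` scope like this:

```csharp
using System;
using Microsoft.Identity.Client;

// Confidential client (daemon) requesting app roles via the client credentials flow.
// All values below are placeholders for your own registration.
IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
    .Create("YOUR_CLIENT_ID")
    .WithClientSecret("YOUR_CLIENT_SECRET")
    .WithTenantId("YOUR_TENANT_ID")
    .Build();

// Individual app roles can't be requested here; .default returns every app role
// (application permission) that has been granted to this client for the resource.
string[] scopes = { "https://graph.microsoft.com/.default" };

AuthenticationResult result = await app
    .AcquireTokenForClient(scopes)
    .ExecuteAsync();

Console.WriteLine($"Token expires on: {result.ExpiresOn}");
```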
+
+To grant access to the app roles you define, including granting admin consent for the application, see [Configure a client application to access a web API](quickstart-configure-app-access-web-apis.md).
+
+### Trailing slash and .default
+
+Some resource URIs have a trailing forward slash, for example, `https://contoso.com/` as opposed to `https://contoso.com`. The trailing slash can cause problems with token validation. Problems occur primarily when a token is requested for Azure Resource Manager (`https://management.azure.com/`). In this case, a trailing slash on the resource URI means the slash must be present when the token is requested. So when you request a token for `https://management.azure.com/` and use `.default`, you must request `https://management.azure.com//.default` (notice the double slash!). In general, if you verify that the token is being issued, and if the token is being rejected by the API that should accept it, consider adding a second forward slash and trying again.
+
+## Next steps
+
+- [Requesting permissions through consent in the identity platform](consent-types-developer.md)
+- [ID tokens in the Microsoft identity platform](id-tokens.md)
+- [Access tokens in the Microsoft identity platform](access-tokens.md)
active-directory Secure Least Privileged Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/secure-least-privileged-access.md
A reducible permission is a permission that has a lower-privileged counterpart t
## Use consent to control access to data
-Most applications require access to protected data, and the owner of that data needs to [consent](application-consent-experience.md#consent-and-permissions) to that access. Consent can be granted in several ways, including by a tenant administrator who can consent for *all* users in an Azure AD tenant, or by the application users themselves who can grant access.
+Most applications require access to protected data, and the owner of that data needs to [consent](consent-types-developer.md) to that access. Consent can be granted in several ways, including by a tenant administrator who can consent for *all* users in an Azure AD tenant, or by the application users themselves who can grant access.
Whenever an application that runs on a device requests access to protected data, the application should ask for the user's consent before accessing the protected data. The user is required to grant (or deny) consent for the requested permission before the application can progress.
active-directory Userinfo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/userinfo.md
The UserInfo endpoint is typically called automatically by [OIDC-compliant libra
## Consider using an ID token instead
-The information in an ID token is a superset of the information available on UserInfo endpoint. Because you can get an ID token at the same time you get a token to call the UserInfo endpoint, we suggest getting the user's information from the token instead calling the UserInfo endpoint. Using the ID token instead of calling the UserInfo endpoint eliminates up to two network requests, reducing latency in your application.
+The information in an ID token is a superset of the information available on UserInfo endpoint. Because you can get an ID token at the same time you get a token to call the UserInfo endpoint, we suggest getting the user's information from the token instead of calling the UserInfo endpoint. Using the ID token instead of calling the UserInfo endpoint eliminates up to two network requests, reducing latency in your application.
If you require more details about the user like manager or job title, call the [Microsoft Graph `/user` API](/graph/api/user-get). You can also use [optional claims](active-directory-optional-claims.md) to include additional user information in your ID and access tokens.
active-directory V2 Permissions And Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-permissions-and-consent.md
Applications in Microsoft identity platform rely on consent in order to gain acc
In the static user consent scenario, you must specify all the permissions it needs in the app's configuration in the Azure portal. If the user (or administrator, as appropriate) has not granted consent for this app, then Microsoft identity platform will prompt the user to provide consent at this time.
-Static permissions also enables administrators to [consent on behalf of all users](#requesting-consent-for-an-entire-tenant) in the organization.
+Static permissions also enable administrators to [consent on behalf of all users](#requesting-consent-for-an-entire-tenant) in the organization.
While static permissions of the app defined in the Azure portal keep the code nice and simple, it presents some possible issues for developers:
After the user enters their credentials, the Microsoft identity platform checks
At this time, the `offline_access` ("Maintain access to data you have given it access to") permission and `User.Read` ("Sign you in and read your profile") permission are automatically included in the initial consent to an application. These permissions are generally required for proper app functionality. The `offline_access` permission gives the app access to refresh tokens that are critical for native apps and web apps. The `User.Read` permission gives access to the `sub` claim. It allows the client or app to correctly identify the user over time and access rudimentary user information.
-![Example screenshot that shows work account consent.](./media/v2-permissions-and-consent/work_account_consent.png)
When the user approves the permission request, consent is recorded. The user doesn't have to consent again when they later sign in to the application.
The scope parameter value is constructed by using the identifier URI for the res
Using `scope={resource-identifier}/.default` is functionally the same as `resource={resource-identifier}` on the v1.0 endpoint (where `{resource-identifier}` is the identifier URI for the API, for example `https://graph.microsoft.com` for Microsoft Graph).
-The `.default` scope can be used in any OAuth 2.0 flow and to initiate [admin consent](v2-admin-consent.md). It's use is required in the [On-Behalf-Of flow](v2-oauth2-on-behalf-of-flow.md) and [client credentials flow](v2-oauth2-client-creds-grant-flow.md).
+The `.default` scope can be used in any OAuth 2.0 flow and to initiate [admin consent](v2-admin-consent.md). Its use is required in the [On-Behalf-Of flow](v2-oauth2-on-behalf-of-flow.md) and [client credentials flow](v2-oauth2-client-creds-grant-flow.md).
Clients can't combine static (`.default`) consent and dynamic consent in a single request. So `scope=https://graph.microsoft.com/.default Mail.Read` results in an error because it combines scope types.
Clients can't combine static (`.default`) consent and dynamic consent in a singl
The `.default` scope is functionally identical to the behavior of the `resource`-centric v1.0 endpoint. It carries the consent behavior of the v1.0 endpoint as well. That is, `.default` triggers a consent prompt only if consent has not been granted for any delegated permission between the client and the resource, on behalf of the signed-in user.
-If consent does exists, the returned token contains all scopes granted for that resource for the signed-in user. However, if no permission has been granted for the requested resource (or if the `prompt=consent` parameter has been provided), a consent prompt is shown for all required permissions configured on the client application registration, for all APIs in the list.
+If consent does exist, the returned token contains all scopes granted for that resource for the signed-in user. However, if no permission has been granted for the requested resource (or if the `prompt=consent` parameter has been provided), a consent prompt is shown for all required permissions configured on the client application registration, for all APIs in the list.
For example, if the scope `https://graph.microsoft.com/.default` is requested, your application is requesting an access token for the Microsoft Graph API. If at least one delegated permission has been granted for Microsoft Graph on behalf of the signed-in user, the sign-in will continue and all Microsoft Graph delegated permissions which have been granted for that user will be included in the access token. If no permissions have been granted for the requested resource (Microsoft Graph, in this example), then a consent prompt will be presented for all required permissions configured on the application, for all APIs in the list.
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md
Previously updated : 09/03/2022 Last updated : 11/01/2022
Welcome to what's new in the Microsoft identity platform documentation. This article lists new docs that have been added and those that have had significant updates in the last three months.
+## October 2022
+
+### Updated articles
+
+- [Access Azure AD protected resources from an app in Google Cloud](workload-identity-federation-create-trust-gcp.md)
+- [Configure an app to trust an external identity provider](workload-identity-federation-create-trust.md)
+- [Configure a user-assigned managed identity to trust an external identity provider (preview)](workload-identity-federation-create-trust-user-assigned-managed-identity.md)
+- [Configuration requirements and troubleshooting tips for Xamarin Android with MSAL.NET](msal-net-xamarin-android-considerations.md)
+- [Customize claims emitted in tokens for a specific app in a tenant](active-directory-claims-mapping.md)
+- [Desktop app that calls web APIs: Acquire a token using Device Code flow](scenario-desktop-acquire-token-device-code-flow.md)
+- [Desktop app that calls web APIs: Acquire a token using integrated Windows authentication](scenario-desktop-acquire-token-integrated-windows-authentication.md)
+- [Desktop app that calls web APIs: Acquire a token using Username and Password](scenario-desktop-acquire-token-username-password.md)
+- [Making your application multi-tenant](howto-convert-app-to-be-multi-tenant.md)
+- [Microsoft identity platform and OAuth 2.0 On-Behalf-Of flow](v2-oauth2-on-behalf-of-flow.md)
+- [Prompt behavior with MSAL.js](msal-js-prompt-behavior.md)
+- [Quickstart: Register an application with the Microsoft identity platform](quickstart-register-app.md)
+- [Tutorial: Sign in users and call the Microsoft Graph API from a JavaScript single-page application](tutorial-v2-javascript-spa.md)
+- [Tutorial: Sign in users and call the Microsoft Graph API from a React single-page app (SPA) using auth code flow](tutorial-v2-react.md)
+ ## September 2022 ### New articles
Welcome to what's new in the Microsoft identity platform documentation. This art
- [Protected web API: Code configuration](scenario-protected-web-api-app-configuration.md) - [Provide optional claims to your app](active-directory-optional-claims.md) - [Using directory extension attributes in claims](active-directory-schema-extensions.md)-
-## July 2022
-
-### New articles
--- [Configure SAML app multi-instancing for an application in Azure Active Directory](reference-app-multi-instancing.md)-
-### Updated articles
--- [Application and service principal objects in Azure Active Directory](app-objects-and-service-principals.md)-- [Application configuration options](msal-client-application-configuration.md)-- [A web API that calls web APIs: Code configuration](scenario-web-api-call-api-app-configuration.md)-- [Claims mapping policy type](reference-claims-mapping-policy-type.md)-- [Customize claims issued in the SAML token for enterprise applications](active-directory-saml-claims-customization.md)-- [Microsoft identity platform access tokens](access-tokens.md)-- [Single-page application: Sign-in and Sign-out](scenario-spa-sign-in.md)-- [Tutorial: Add sign-in to Microsoft to an ASP.NET web app](tutorial-v2-asp-webapp.md)
active-directory Howto Hybrid Azure Ad Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-hybrid-azure-ad-join.md
Hybrid Azure AD join requires devices to have access to the following Microsoft
- Your organization's Security Token Service (STS) (**For federated domains**) > [!WARNING]
-> If your organization uses proxy servers that intercept SSL traffic for scenarios like data loss prevention or Azure AD tenant restrictions, ensure that traffic to `https://devices.login.microsoftonline.com` is excluded from TLS break-and-inspect. Failure to exclude this URL may cause interference with client certificate authentication, cause issues with device registration, and device-based Conditional Access.
+> If your organization uses proxy servers that intercept SSL traffic for scenarios like data loss prevention or Azure AD tenant restrictions, ensure that traffic to `https://device.login.microsoftonline.com` is excluded from TLS break-and-inspect. Failure to exclude this URL may cause interference with client certificate authentication, cause issues with device registration, and device-based Conditional Access.
If your organization requires access to the internet via an outbound proxy, you can use [Web Proxy Auto-Discovery (WPAD)](/previous-versions/tn-archive/cc995261(v=technet.10)) to enable Windows 10 or newer computers for device registration with Azure AD. To address issues configuring and managing WPAD, see [Troubleshooting Automatic Detection](/previous-versions/tn-archive/cc302643(v=technet.10)).
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/whats-new-docs.md
Welcome to what's new in Azure Active Directory External Identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the External Identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## October 2022
+
+### Updated articles
+
+- [Tutorial: Bulk invite Azure AD B2B collaboration users](tutorial-bulk-invite.md)
+- [Quickstart: Add a guest user and send an invitation](b2b-quickstart-add-guest-users-portal.md)
+- [Define custom attributes for user flows](user-flow-add-custom-attributes.md)
+- [Create dynamic groups in Azure Active Directory B2B collaboration](use-dynamic-groups.md)
+- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md)
+- [Authentication and Conditional Access for External Identities](authentication-conditional-access.md)
+- [Leave an organization as an external user](leave-the-organization.md)
+- [Azure Active Directory External Identities: What's new](whats-new-docs.md)
+- [Federation with SAML/WS-Fed identity providers for guest users](direct-federation.md)
+- [Example: Configure SAML/WS-Fed based identity provider federation with AD FS](direct-federation-adfs.md)
+- [The elements of the B2B collaboration invitation email - Azure Active Directory](invitation-email-elements.md)
+- [Configure Microsoft cloud settings for B2B collaboration (Preview)](cross-cloud-settings.md)
+- [Add Microsoft account (MSA) as an identity provider for External Identities](microsoft-account.md)
+- [How users in your organization can invite guest users to an app](add-users-information-worker.md)
+ ## September 2022 ### Updated articles
Welcome to what's new in Azure Active Directory External Identities documentatio
- [Overview: Cross-tenant access with Azure AD External Identities](cross-tenant-access-overview.md) - [Configure cross-tenant access settings for B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md) - [Azure Active Directory External Identities: What's new](whats-new-docs.md)-
-## July 2022
-
-### Updated articles
--- [Configure cross-tenant access settings for B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md)-- [Configure cross-tenant access settings for B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md)-- [Add Google as an identity provider for B2B guest users](google-federation.md)-- [Azure Active Directory External Identities: What's new](whats-new-docs.md)-- [Overview: Cross-tenant access with Azure AD External Identities](cross-tenant-access-overview.md)-- [B2B direct connect overview](b2b-direct-connect-overview.md)-- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
We recommend that you harden your Azure AD Connect server to decrease the securi
- Follow these [additional guidelines](/windows-server/identity/ad-ds/plan/security-best-practices/reducing-the-active-directory-attack-surface) to reduce the attack surface of your Active Directory environment. - Follow the [Monitor changes to federation configuration](how-to-connect-monitor-federation-changes.md) to set up alerts to monitor changes to the trust established between your IdP and Azure AD. - Enable Multi Factor Authentication (MFA) for all users that have privileged access in Azure AD or in AD. One security issue with using AADConnect is that if an attacker can get control over the Azure AD Connect server they can manipulate users in Azure AD. To prevent an attacker from using these capabilities to take over Azure AD accounts, MFA offers protections so that even if an attacker manages to, for example, reset a user's password using Azure AD Connect they still cannot bypass the second factor.-- Disable Soft Matching on your tenant. Soft Matching is a great feature to help transfering source of autority for existing cloud only objects to Azure AD Connect, but it comes with certain security risks. If you do not require it, you should [disable Soft Matching](how-to-connect-syncservice-features.md#blocksoftmatch)
+- Disable Soft Matching on your tenant. Soft Matching is a great feature to help transfer the source of authority for existing cloud managed objects to Azure AD Connect, but it comes with certain security risks. If you do not require it, you should [disable Soft Matching](how-to-connect-syncservice-features.md#blocksoftmatch).
+- Disable Hard Match Takeover. Hard match takeover allows Azure AD Connect to take control of a cloud managed object and change the source of authority for the object to Active Directory. Once the source of authority of an object is taken over by Azure AD Connect, changes made to the Active Directory object that is linked to the Azure AD object will overwrite the original Azure AD data, including the password hash, if Password Hash Sync is enabled. An attacker could use this capability to take control of cloud managed objects. To mitigate this risk, [disable hard match takeover](https://learn.microsoft.com/powershell/module/msonline/set-msoldirsyncfeature?view=azureadps-1.0#example-3-block-cloud-object-takeover-through-hard-matching-for-the-tenant).
### SQL Server used by Azure AD Connect * Azure AD Connect requires a SQL Server database to store identity data. By default, a SQL Server 2019 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10-GB size limit that enables you to manage approximately 100,000 objects. If you need to manage a higher volume of directory objects, point the installation wizard to a different installation of SQL Server. The type of SQL Server installation can impact the [performance of Azure AD Connect](./plan-connect-performance-factors.md#sql-database-factors).
active-directory User Admin Consent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/user-admin-consent-overview.md
+
+ Title: Overview of user and admin consent
+description: Learn about the fundamental concepts of user and admin consent in Azure AD
+++++++ Last updated : 09/28/2022++++
+# User and admin consent in Azure Active Directory
+
+In this article, you'll learn the foundational concepts and scenarios around user and admin consent in Azure Active Directory (Azure AD).
+
+Consent is a process where users can grant permission for an application to access a protected resource. To indicate the level of access required, an application requests the API permissions it requires. For example, an application can request the permission to see a signed-in user's profile and read the contents of the user's mailbox.
+
+Consent can be initiated in various ways. For example, users can be prompted for consent when they attempt to sign in to an application for the first time. Depending on the permissions requested, some applications require an administrator to grant consent.
+
+## User consent
+
+A user can authorize an application to access some data at the protected resource, while acting as that user. The permissions that allow this type of access are called "delegated permissions."
+
+User consent is usually initiated when a user signs in to an application. After the user provides sign-in credentials, Azure AD checks whether consent has already been granted. If no previous record of user or admin consent for the required permissions exists, the user is directed to the consent prompt window to grant the application the requested permissions.
+
+User consent by non-administrators is possible only in organizations where user consent is allowed for the application and for the set of permissions the application requires. If user consent is disabled, or if users aren't allowed to consent for the requested permissions, they won't be prompted for consent. If users are allowed to consent and they accept the requested permissions, the consent is recorded and they usually don't have to consent again on future sign-ins to the same application.
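As an illustrative sketch (not part of the original article), an administrator can review the delegated permission grants recorded for a user with a Microsoft Graph call such as the following; the user identifier is a placeholder:
```http
GET https://graph.microsoft.com/v1.0/users/{user-id}/oauth2PermissionGrants
```
Each returned `oAuth2PermissionGrant` object identifies the client application, the resource API, and the scopes the user consented to.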
+
+### User consent settings
+
+Users are in control of their data. A Privileged Administrator can configure whether non-administrator users are allowed to grant user consent to an application. This setting can take into account aspects of the application and the application's publisher, and the permissions being requested.
+
+As an administrator, you can choose whether user consent is allowed. If you choose to allow user consent, you can also choose what conditions must be met before an application can be consented to by a user.
+
+By choosing which application consent policies apply for all users, you can set limits on when users are allowed to grant consent to applications and on when they'll be required to request administrator review and approval. The Azure portal provides the following built-in options:
+
+- *You can disable user consent*. Users can't grant permissions to applications. Users continue to sign in to applications they've previously consented to or to applications that administrators have granted consent to on their behalf, but they won't be allowed to consent to new permissions to applications on their own. Only users who have been granted a directory role that includes the permission to grant consent can consent to new applications.
+
+- *Users can consent to applications from verified publishers or your organization, but only for permissions you select*. All users can consent only to applications that were published by a [verified publisher](../develop/publisher-verification-overview.md) and applications that are registered in your tenant. Users can consent only to the permissions that you've classified as *low impact*. You must [classify permissions](configure-permission-classifications.md) to select which permissions users are allowed to consent to.
+
+- *Users can consent to all applications*. This option allows all users to consent to any permissions that don't require admin consent, for any application.
+
+For most organizations, one of the built-in options will be appropriate. Some advanced customers might want more control over the conditions that govern when users are allowed to consent. These customers can [create a custom app consent policy](manage-app-consent-policies.md#create-a-custom-app-consent-policy) and configure it to apply to user consent.
+
+## Admin consent
+
+During admin consent, a Privileged Administrator may grant an application access on behalf of other users (usually, on behalf of the entire organization). Admin consent can also grant an application or service direct access to an API, which the application can use when there's no signed-in user.
+
+When your organization purchases a license or subscription for a new application, you might proactively want to set up the application so that all users in the organization can use it. To avoid the need for user consent, an administrator can grant consent for the application on behalf of all users in the organization.
+
+After an administrator grants admin consent on behalf of the organization, users aren't usually prompted for consent for that application. In certain cases, a user might be prompted for consent even after consent was granted by an administrator. An example might be if an application requests another permission that the administrator hasn't already granted.
+
+Granting admin consent on behalf of an organization is a sensitive operation, potentially allowing the application's publisher access to significant portions of the organization's data, or the permission to do highly privileged operations. Examples of such operations might be role management, full access to all mailboxes or all sites, and full user impersonation.
+
+Before you grant tenant-wide admin consent, ensure that you trust the application and the application publisher, for the level of access you're granting. If you aren't confident that you understand who controls the application and why the application is requesting the permissions, do *not* grant consent.
+
+For step-by-step guidance on whether to grant an application admin consent, see [Evaluating a request for tenant-wide admin consent](manage-consent-requests.md#evaluate-a-request-for-tenant-wide-admin-consent).
+
+For step-by-step instructions for granting tenant-wide admin consent from the Azure portal, see [Grant tenant-wide admin consent to an application](grant-admin-consent.md).
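For illustration only, a tenant administrator can also trigger the tenant-wide admin consent experience by browsing to the admin consent endpoint; the client ID, redirect URI, and state below are placeholder values:
```http
GET https://login.microsoftonline.com/{tenant}/v2.0/adminconsent?
client_id=00001111-aaaa-2222-bbbb-3333cccc4444
&scope=https://graph.microsoft.com/.default
&redirect_uri=https://localhost/myapp/permissions
&state=12345
```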
+
+### Grant consent on behalf of a specific user
+
+Instead of granting consent for an entire organization, an admin can also use the [Microsoft Graph API](/graph/use-the-api) to grant consent to delegated permissions on behalf of a single user. For a detailed example that uses Microsoft Graph PowerShell, see [Grant consent on behalf of a single user by using PowerShell](manage-consent-requests.md).
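As a hedged sketch of what that Graph operation looks like (the object IDs and scopes below are placeholders; see the linked article for the authoritative steps), granting delegated permissions on behalf of a single user creates an `oAuth2PermissionGrant`:
```http
POST https://graph.microsoft.com/v1.0/oauth2PermissionGrants
Content-Type: application/json

{
  "clientId": "{service principal object ID of the client app}",
  "consentType": "Principal",
  "principalId": "{object ID of the user receiving the grant}",
  "resourceId": "{service principal object ID of the resource API, such as Microsoft Graph}",
  "scope": "User.Read Mail.Read"
}
```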
+
+### Limit user access to an application
+
+User access to applications can still be limited, even when tenant-wide admin consent has been granted. Configure the application's properties to require user assignment to limit user access to the application. For more information, see [Methods for assigning users and groups](assign-user-or-group-access-portal.md).
+
+For a broader overview, including how to handle other complex scenarios, see [Use Azure AD for application access management](what-is-access-management.md).
+
+## Admin consent workflow
+
+The admin consent workflow gives users a way to request admin consent for applications when they aren't allowed to consent themselves. When the admin consent workflow is enabled, users are presented with an "Approval required" window for requesting admin approval for access to the application.
+
+After users submit the admin consent request, the admins who have been designated as reviewers receive a notification. The users are notified after a reviewer has acted on their request. For step-by-step instructions for configuring the admin consent workflow by using the Azure portal, see [configure the admin consent workflow](configure-admin-consent-workflow.md).
+
+### How users request admin consent
+
+After the admin consent workflow is enabled, users can request admin approval for an application that they're unauthorized to consent to. Here are the steps in the process:
+
+1. A user attempts to sign in to the application.
+1. An **Approval required** message appears. The user types a justification for needing access to the application and then selects **Request approval**.
+1. A **Request sent** message confirms that the request was submitted to the admin. If the user sends several requests, only the first request is submitted to the admin.
+1. The user receives an email notification when the request is approved, denied, or blocked.
+
+## Next steps
+
+- [Configure user consent settings](configure-user-consent.md)
+- [Configure the admin consent workflow](configure-admin-consent-workflow.md)
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 10/03/2022 Last updated : 11/01/2022
Welcome to what's new in Azure Active Directory (Azure AD) application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure AD](../fundamentals/whats-new.md).
+## October 2022
+
+### Updated articles
+
+- [Configure how users consent to applications](configure-user-consent.md)
+- [Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication](f5-big-ip-kerberos-advanced.md)
+- [Tutorial: Configure F5 BIG-IP Easy Button for Kerberos single sign-on](f5-big-ip-kerberos-easy-button.md)
+- [Tutorial: Configure F5 BIG-IP Easy Button for header-based and LDAP single sign-on](f5-big-ip-ldap-header-easybutton.md)
+- [Tutorial: Migrate your applications from Okta to Azure Active Directory](migrate-applications-from-okta-to-azure-active-directory.md)
+- [Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Silverfort](silverfort-azure-ad-integration.md)
+ ## September 2022 ### New articles
Welcome to what's new in Azure Active Directory (Azure AD) application managemen
### Updated articles - [Hide an enterprise application](hide-application-from-user-portal.md)-
-## July 2022
-
-### New articles
--- [Create an enterprise application from a multi-tenant application in Azure Active Directory](create-service-principal-cross-tenant.md)-- [Deletion and recovery of applications FAQ](delete-recover-faq.yml)-- [Recover deleted applications in Azure Active Directory FAQs](recover-deleted-apps-faq.md)-- [Restore an enterprise application in Azure AD](restore-application.md)-- [SAML Request Signature Verification (Preview)](howto-enforce-signed-saml-authentication.md)-- [Tutorial: Configure Cloudflare with Azure Active Directory for secure hybrid access](cloudflare-azure-ad-integration.md)-- [Tutorial: Configure Datawiza to enable Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle JD Edwards](datawiza-azure-ad-sso-oracle-jde.md)-
-### Updated articles
--- [Delete an enterprise application](delete-application-portal.md)-- [Configure Azure Active Directory SAML token encryption](howto-saml-token-encryption.md)-- [Review permissions granted to applications](manage-application-permissions.md)-- [Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Datawiza](datawiza-with-azure-ad.md)
active-directory How To View Applied Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/how-to-view-applied-conditional-access-policies.md
Title: View applied Conditional Access policies in Azure AD sign-in logs
-description: Learn how to view Conditional Access policies in Azure AD sign-in logs so that you can assess the impact of those policies.
+description: Learn how to view Conditional Access policies in Azure AD sign-in logs so that you can assess the effect of those policies.
-+ - Previously updated : 09/14/2022- Last updated : 10/31/2022+
# View applied Conditional Access policies in Azure AD sign-in logs
-With Conditional Access policies, you can control how your users get access to the resources of your Azure tenant. As a tenant admin, you need to be able to determine what impact your Conditional Access policies have on sign-ins to your tenant, so that you can take action if necessary.
+With Conditional Access policies, you can control how your users get access to the resources of your Azure tenant. As a tenant admin, you need to be able to determine what effect your Conditional Access policies have on sign-ins to your tenant, so that you can take action if necessary.
-The sign-in logs in Azure Active Directory (Azure AD) give you the information that you need to assess the impact of your policies. This article explains how to view applied Conditional Access policies in those logs.
+The sign-in logs in Azure Active Directory (Azure AD) give you the information that you need to assess the effect of your policies. This article explains how to view applied Conditional Access policies in those logs.
## What you should know
Some scenarios require you to get an understanding of how your Conditional Acces
- *Helpdesk administrators* who need to look at applied Conditional Access policies to understand if a policy is the root cause of a ticket that a user opened. -- *Tenant administrators* who need to verify that Conditional Access policies have the intended impact on the users of a tenant.
+- *Tenant administrators* who need to verify that Conditional Access policies have the intended effect on the users of a tenant.
You can access the sign-in logs by using the Azure portal, Microsoft Graph, and PowerShell.
To view the sign-in logs, use:
`Get-MgAuditLogSignIn`
-For more information about this cmdlet, see [Get-MgAuditLogSignIn](https://learn.microsoft.com/powershell/module/microsoft.graph.reports/get-mgauditlogsignin?view=graph-powershell-1.0).
+For more information about this cmdlet, see [Get-MgAuditLogSignIn](/powershell/module/microsoft.graph.reports/get-mgauditlogsignin).
The Azure AD Graph PowerShell module doesn't support viewing applied Conditional Access policies. Only the Microsoft Graph PowerShell module returns applied Conditional Access policies.
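As a minimal sketch of the Microsoft Graph option (the user principal name below is a placeholder), sign-in events, including the `appliedConditionalAccessPolicies` collection, can be retrieved with a request like this:
```http
GET https://graph.microsoft.com/v1.0/auditLogs/signIns?$filter=userPrincipalName eq 'alice@contoso.com'&$top=10
```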
active-directory Howto Access Activity Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-access-activity-logs.md
Title: Access activity logs in Azure AD | Microsoft Docs description: Learn how to choose the right method for accessing the activity logs in Azure AD. -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+
active-directory Howto Analyze Activity Logs Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-analyze-activity-logs-log-analytics.md
Title: Analyze activity logs using Azure Monitor logs | Microsoft Docs description: Learn how to analyze Azure Active Directory activity logs using Azure Monitor logs -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+
To follow along, you need:
* A [Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md) in your Azure subscription. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). * First, complete the steps to [route the Azure AD activity logs to your Log Analytics workspace](howto-integrate-activity-logs-with-log-analytics.md). * [Access](../../azure-monitor/logs/manage-access.md#azure-rbac) to the log analytics workspace
-* The following roles in Azure Active Directory (if you are accessing Log Analytics through Azure Active Directory portal)
+* The following roles in Azure Active Directory (if you're accessing Log Analytics through Azure Active Directory portal)
- Security Admin - Security Reader - Report Reader
You can also set up alerts on your query. For example, to configure an alert whe
4. Select the **Action Group** that will be alerted when the signal occurs. You can choose to notify your team via email or text message, or you could automate the action using webhooks, Azure functions or logic apps. Learn more about [creating and managing alert groups in the Azure portal](../../azure-monitor/alerts/action-groups.md).
-5. Once you have configured the alert, select **Create alert** to enable it.
+5. Once you've configured the alert, select **Create alert** to enable it.
## Use pre-built workbooks for Azure AD activity logs The workbooks provide several reports related to common scenarios involving audit, sign-in, and provisioning events. You can also alert on any of the data provided in the reports, using the steps described in the previous section.
-* **Provisioning analysis**: This [workbook](../app-provisioning/application-provisioning-log-analytics.md) shows reports related to auditing provisioning activity, such as the number of new users provisioned and provisioning failures, number of users updated and update failures and the number of users de-provisioned and corresponding failures.
-* **Sign-ins Events**: This workbook shows the most relevant reports related to monitoring sign-in activity, such as sign-ins by application, user, device, as well as a summary view tracking the number of sign-ins over time.
-* **Conditional access insights**: The Conditional Access insights and reporting [workbook](../conditional-access/howto-conditional-access-insights-reporting.md) enables you to understand the impact of Conditional Access policies in your organization over time.
+* **Provisioning analysis**: This [workbook](../app-provisioning/application-provisioning-log-analytics.md) shows reports related to auditing provisioning activity. Activities can include the number of new users provisioned, provisioning failures, number of users updated, update failures, the number of users de-provisioned, and corresponding failures.
+* **Sign-ins Events**: This workbook shows the most relevant reports related to monitoring sign-in activity, such as sign-ins by application, user, device, and a summary view tracking the number of sign-ins over time.
+* **Conditional access insights**: The Conditional Access insights and reporting [workbook](../conditional-access/howto-conditional-access-insights-reporting.md) enables you to understand the effect of Conditional Access policies in your organization over time.
## Next steps
active-directory Howto Configure Prerequisites For Reporting Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api.md
Title: Prerequisites for Azure Active Directory reporting API | Microsoft Docs description: Learn about the prerequisites to access the Azure AD reporting API -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+ # Prerequisites to access the Azure Active Directory reporting API
-The [Azure Active Directory (Azure AD) reporting APIs](./concept-reporting-api.md) provide you with programmatic access to the data through a set of REST-based APIs. You can call these APIs from a number of programming languages and tools.
+The [Azure Active Directory (Azure AD) reporting APIs](./concept-reporting-api.md) provide you with programmatic access to the data through a set of REST-based APIs. You can call these APIs from many programming languages and tools.
The reporting API uses [OAuth](../../api-management/api-management-howto-protect-backend-with-aad.md) to authorize access to the web APIs.
This section shows you how to get the following settings from your directory:
- Client ID - Client secret or certificate
-You need these values when configuring calls to the reporting API. We recommend using a certificate because it is more secure.
+You need these values when configuring calls to the reporting API. We recommend using a certificate because it's more secure.
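As a hedged example of how these values are later used (the tenant, client ID, and secret are placeholders, and this request isn't a step in the prerequisite walkthrough), a client credentials token request for Microsoft Graph looks roughly like this:
```http
POST https://login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

client_id=00001111-aaaa-2222-bbbb-3333cccc4444
&scope=https://graph.microsoft.com/.default
&client_secret={client-secret}
&grant_type=client_credentials
```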
### Get your domain name
You need these values when configuring calls to the reporting API. We recommend
**To get your application's client ID:**
-1. In the [Azure portal](https://portal.azure.com), on the left navigation pane, click **Azure Active Directory**.
+1. In the [Azure portal](https://portal.azure.com), on the left navigation pane, select **Azure Active Directory**.
![Screenshot shows Azure Active Directory selected from the Azure portal menu to get application's client ID.](./media/howto-configure-prerequisites-for-reporting-api/01.png)
You need these values when configuring calls to the reporting API. We recommend
**To get your application's client secret:**
-1. In the [Azure portal](https://portal.azure.com), on the left navigation pane, click **Azure Active Directory**.
+1. In the [Azure portal](https://portal.azure.com), on the left navigation pane, select **Azure Active Directory**.
![Screenshot shows Azure Active Directory selected from the Azure portal menu to get application's client secret.](./media/howto-configure-prerequisites-for-reporting-api/01.png) 2. Select your application from the **App Registrations** page.
-3. Select **Certificates and Secrets** on the **API Application** page, in the **Client Secrets** section, click **+ New Client Secret**.
+3. Select **Certificates and Secrets** on the **API Application** page, in the **Client Secrets** section, select **+ New Client Secret**.
![Screenshot shows the Certificates & secrets page where you can add a client secret.](./media/howto-configure-prerequisites-for-reporting-api/12.png)
You need these values when configuring calls to the reporting API. We recommend
b. As **Expires**, select **In 2 years**.
- c. Click **Save**.
+ c. Select **Save**.
d. Copy the key value.
If you run into this error message while trying to access sign-ins using Graph E
![Modify permissions UI](./media/troubleshoot-graph-api/modify-permissions.png)
-### Error: Tenant is not B2C or tenant doesn't have premium license
+### Error: Tenant isn't B2C or tenant doesn't have premium license
Accessing sign-in reports requires an Azure Active Directory Premium 1 (P1) license. If you see this error message while accessing sign-ins, make sure that your tenant is licensed with an Azure AD P1 license.
-### Error: The allowed roles does not include User.
+### Error: The allowed roles doesn't include User.
Avoid errors trying to access audit logs or sign-ins using the API. Make sure your account is part of the **Security Reader** or **Report Reader** role in your Azure Active Directory tenant.
-### Error: Application missing AAD 'Read directory data' permission
+### Error: Application missing Azure AD 'Read directory data' permission
### Error: Application missing Microsoft Graph API 'Read all audit log data' permission
active-directory Howto Download Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-download-logs.md
Title: How to download logs in Azure Active Directory | Microsoft Docs description: Learn how to download activity logs in Azure Active Directory. -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+
active-directory Howto Find Activity Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-find-activity-reports.md
Title: Find user activity reports in Azure portal | Microsoft Docs description: Learn where the Azure Active Directory user activity reports are in the Azure portal. -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+
In this article, you learn how to find Azure Active Directory (Azure AD) user ac
The audit logs report combines several reports around application activities into a single view for context-based reporting. To access the audit logs report: 1. Navigate to the [Azure portal](https://portal.azure.com).
-2. Select your directory from the top-right corner, then select the **Azure Active Directory** blade from the left navigation pane.
-3. Select **Audit logs** from the **Activity** section of the Azure Active Directory blade.
+1. Select **Audit logs** from the **Activity** section of Azure Active Directory.
![Audit logs](./media/howto-find-activity-reports/482.png "Audit logs")
The **Sign-ins** view includes all user sign-ins, as well as the **Application U
To access the sign-ins report: 1. Navigate to the [Azure portal](https://portal.azure.com).
-2. Select your directory from the top-right corner, then select the **Azure Active Directory** blade from the left navigation pane.
+2. Select your directory from the top-right corner, then select **Azure Active Directory** from the left navigation pane.
3. Select **Sign-ins** from the **Activity** section of the Azure Active Directory blade. ![Sign-ins view](./media/howto-find-activity-reports/483.png "Sign-ins view")
active-directory Howto Install Use Log Analytics Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-install-use-log-analytics-views.md
Title: How to install and use the log analytics views | Microsoft Docs description: Learn how to install and use the log analytics views for Azure Active Directory -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+
# Install and use the log analytics views for Azure Active Directory
-The Azure Active Directory log analytics views helps you analyze and search the Azure AD activity logs in your Azure AD tenant. Azure AD activity logs include:
+The Azure Active Directory (Azure AD) log analytics views helps you analyze and search the Azure AD activity logs in your Azure AD tenant. Azure AD activity logs include:
* Audit logs: The [audit logs activity report](concept-audit-logs.md) gives you access to the history of every task that's performed in your tenant. * Sign-in logs: With the [sign-in activity report](concept-sign-ins.md), you can determine who performed the tasks that are reported in the audit logs.
To use the log analytics views, you need:
## Install the log analytics views
-1. Navigate to your Log Analytics workspace. To do this, first navigate to the [Azure portal](https://portal.azure.com) and select **All services**. Type **Log Analytics** in the text box, and select **Log Analytics workspaces**. Select the workspace you routed the activity logs to, as part of the prerequisites.
-2. Select **View Designer**, select **Import** and then select **Choose File** to import the views from your local computer.
-3. Select the views you downloaded from the prerequisites and select **Save** to save the import. Do this for the **Azure AD Account Provisioning Events** view and the **Sign-ins Events** view.
+1. Navigate to the [Azure portal](https://portal.azure.com) and select **All services**.
+1. Type **Log Analytics** in the text box, and select **Log Analytics workspaces**. Select the workspace you routed the activity logs to, as part of the prerequisites.
+1. Select **View Designer** > **Import** > **Choose File** to import the views from your local computer.
+1. Select the views you downloaded from the prerequisites and select **Save** to save the import. Complete this step for the **Azure AD Account Provisioning Events** view and the **Sign-ins Events** view.
## Use the views
-1. Navigate to your Log Analytics workspace. To do this, first navigate to the [Azure portal](https://portal.azure.com) and select **All services**. Type **Log Analytics** in the text box, and select **Log Analytics workspaces**. Select the workspace you routed the activity logs to, as part of the prerequisites.
+1. Navigate to the [Azure portal](https://portal.azure.com) and select **All services**.
+1. Type **Log Analytics** in the text box, and select **Log Analytics workspaces**. Select the workspace you routed the activity logs to, as part of the prerequisites.
-2. Once you're in the workspace, select **Workspace Summary**. You should see the following three views:
+1. Once you're in the workspace, select **Workspace Summary**. You should see the following three views:
- * **Azure AD Account Provisioning Events**: This view shows reports related to auditing provisioning activity, such as the number of new users provisioned and provisioning failures, number of users updated and update failures and the number of users de-provisioned and corresponding failures.
- * **Sign-ins Events**: This view shows the most relevant reports related to monitoring sign-in activity, such as sign-ins by application, user, device, as well as a summary view tracking the number of sign-ins over time.
+ * **Azure AD Account Provisioning Events**: This view shows reports related to the auditing provisioning activity. Activities can include the number of new users provisioned, provisioning failures, number of users updated, update failures, the number of users de-provisioned and their corresponding failures.
+ * **Sign-ins Events**: This view shows the most relevant reports related to monitoring sign-in activity, such as sign-ins by application, user, device, and a summary view tracking the number of sign-ins over time.
-3. Select either of these views to jump in to the individual reports. You can also set alerts on any of the report parameters. For example, let's set an alert for every time there's a sign-in error. To do this, first select the **Sign-ins Events** view, select **Sign-in errors over time** report and then select **Analytics** to open the details page, with the actual query behind the report.
+1. Select either of these views to jump in to the individual reports. You can also set alerts on any of the report parameters. For example, let's set an alert for every time there's a sign-in error.
+1. Select the **Sign-ins Events** > **Sign-in errors over time** > **Analytics** to open the details page, with the actual query behind the report.
![Screenshot shows the Analytics details page which has the query for the report.](./media/howto-install-use-log-analytics-views/details.png)
-4. Select **Set Alert**, and then select **Whenever the Custom log search is &lt;logic undefined&gt;** under the **Alert criteria** section. Since we want to alert whenever there's a sign-in error, set the **Threshold** of the default alert logic to **1** and then select **Done**.
+1. Select **Set Alert**, and then select **Whenever the Custom log search is &lt;logic undefined&gt;** under the **Alert criteria** section. Since we want to alert whenever there's a sign-in error, set the **Threshold** of the default alert logic to **1** and then select **Done**.
![Configure signal logic](./media/howto-install-use-log-analytics-views/configure-signal-logic.png)
-5. Enter a name and description for the alert and set the severity to **Warning**.
+1. Enter a name and description for the alert and set the severity to **Warning**.
![Create rule](./media/howto-install-use-log-analytics-views/create-rule.png)
-6. Select the action group to alert. In general, this can be either a team you want to notify via email or text message, or it can be an automated task using webhooks, runbooks, functions, logic apps or external ITSM solutions. Learn how to [create and manage action groups in the Azure portal](../../azure-monitor/alerts/action-groups.md).
+1. Select the action group to alert, such as a team you want to notify via email or text message. Learn how to [create and manage action groups in the Azure portal](../../azure-monitor/alerts/action-groups.md).
-7. Select **Create alert rule** to create the alert. Now you will be alerted every time there's a sign-in error.
+1. Select **Create alert rule** to create the alert. Now you'll be alerted every time there's a sign-in error.
## Next steps
active-directory Howto Integrate Activity Logs With Arcsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-arcsight.md
Title: Integrate logs with ArcSight using Azure Monitor | Microsoft Docs description: Learn how to integrate Azure Active Directory logs with ArcSight using Azure Monitor -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+
In this article, you learn how to route Azure AD logs to ArcSight using Azure Mo
To use this feature, you need: * An Azure event hub that contains Azure AD activity logs. Learn how to [stream your activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md).
-* A configured instance of ArcSight Syslog NG Daemon SmartConnector (SmartConnector) or ArcSight Load Balancer. If the events are sent to ArcSight Load Balancer, they are consequently sent to the SmartConnector by the Load Balancer.
+* A configured instance of ArcSight Syslog NG Daemon SmartConnector (SmartConnector) or ArcSight Load Balancer. If the events are sent to ArcSight Load Balancer, they're sent to the SmartConnector by the Load Balancer.
-Download and open the [configuration guide for ArcSight SmartConnector for Azure Monitor Event Hub](https://community.microfocus.com/t5/ArcSight-Connectors/SmartConnector-for-Microsoft-Azure-Monitor-Event-Hub/ta-p/1671292). This guide contains the steps you need to install and configure the ArcSight SmartConnector for Azure Monitor.
+Download and open the [configuration guide for ArcSight SmartConnector for Azure Monitor Event Hubs](https://community.microfocus.com/t5/ArcSight-Connectors/SmartConnector-for-Microsoft-Azure-Monitor-Event-Hub/ta-p/1671292). This guide contains the steps you need to install and configure the ArcSight SmartConnector for Azure Monitor.
## Integrate Azure AD logs with ArcSight
Download and open the [configuration guide for ArcSight SmartConnector for Azure
2. Follow the steps in the **Deploying the Connector** section of the configuration guide to deploy the connector. This section walks you through how to download and extract the connector, configure application properties, and run the deployment script from the extracted folder.
-3. Use the steps in the **Verifying the Deployment in Azure** to make sure the connector is set up and functions correctly. Verify the following:
+3. Use the steps in the **Verifying the Deployment in Azure** section to make sure the connector is set up and functions correctly. Verify the following items:
* The requisite Azure functions are created in your Azure subscription. * The Azure AD logs are streamed to the correct destination. * The application settings from your deployment are persisted in the Application Settings in Azure Function Apps. * A new resource group for ArcSight is created in Azure, with an Azure AD application for the ArcSight connector and storage accounts containing the mapped files in CEF format.
-4. Finally, complete the post-deployment steps in the **Post-Deployment Configurations** of the configuration guide. This section explains how to perform additional configuration if you are on an App Service Plan to prevent the function apps from going idle after a timeout period, configure streaming of resource logs from the event hub, and update the SysLog NG Daemon SmartConnector keystore certificate to associate it with the newly created storage account.
+4. Finally, complete the post-deployment steps in the **Post-Deployment Configurations** section of the configuration guide. This section explains how to perform further configuration if you're on an App Service Plan to prevent the function apps from going idle after a timeout period, configure streaming of resource logs from the event hub, and update the SysLog NG Daemon SmartConnector keystore certificate to associate it with the newly created storage account.
-5. The configuration guide also explains how to customize the connector properties in Azure, and how to upgrade and uninstall the connector. There is also a section on performance improvements, including upgrading to an [Azure Consumption plan](https://azure.microsoft.com/pricing/details/functions) and configuring an ArcSight Load Balancer if the event load is greater than what a single Syslog NG Daemon SmartConnector can handle.
+5. The configuration guide also explains how to customize the connector properties in Azure, and how to upgrade and uninstall the connector. There's also a section on performance improvements, including upgrading to an [Azure Consumption plan](https://azure.microsoft.com/pricing/details/functions) and configuring an ArcSight Load Balancer if the event load is greater than what a single Syslog NG Daemon SmartConnector can handle.
## Next steps
-[Configuration guide for ArcSight SmartConnector for Azure Monitor Event Hub](https://community.microfocus.com/t5/ArcSight-Connectors/SmartConnector-for-Microsoft-Azure-Monitor-Event-Hub/ta-p/1671292)
+[Configuration guide for ArcSight SmartConnector for Azure Monitor Event Hubs](https://community.microfocus.com/t5/ArcSight-Connectors/SmartConnector-for-Microsoft-Azure-Monitor-Event-Hub/ta-p/1671292)
active-directory Howto Integrate Activity Logs With Splunk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-splunk.md
Title: Integrate Splunk using Azure Monitor | Microsoft Docs description: Learn how to integrate Azure Active Directory logs with Splunk using Azure Monitor. -+ - Previously updated : 08/22/2022- Last updated : 10/31/2022+
active-directory Howto Integrate Activity Logs With Sumologic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-sumologic.md
Title: Stream logs to SumoLogic using Azure Monitor | Microsoft Docs description: Learn how to integrate Azure Active Directory logs with SumoLogic using Azure Monitor. -+ - Previously updated : 08/22/2022- Last updated : 10/31/2022+
active-directory Howto Manage Inactive User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md
Title: How to manage inactive user accounts in Azure AD | Microsoft Docs description: Learn about how to detect and handle user accounts in Azure AD that have become obsolete -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+
active-directory Howto Troubleshoot Sign In Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-troubleshoot-sign-in-errors.md
Title: How to troubleshoot sign-in errors reports | Microsoft Docs description: Learn how to troubleshoot sign-in errors using Azure Active Directory reports in the Azure portal -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+
You need:
![Filter results](./media/howto-troubleshoot-sign-in-errors/filters.png)
-4. Identify the failed sign-in you want to investigate. Select it to open up the additional details window with more information about the failed sign-in. Note down the **Sign-in error code** and **Failure reason**.
+4. Identify the failed sign-in you want to investigate. Select it to open the details window with more information about the failed sign-in. Note down the **Sign-in error code** and **Failure reason**.
![Select record](./media/howto-troubleshoot-sign-in-errors/sign-in-failures.png)
active-directory Howto Use Azure Monitor Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md
Title: Azure Monitor workbooks for reports | Microsoft Docs description: Learn how to use Azure Monitor workbooks for Azure Active Directory reports. -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+ # How to use Azure Monitor workbooks for Azure Active Directory reports
As an IT admin, you need powerful tools to turn the data about your Azure AD ten
This article gives you an overview of how you can use Azure Monitor workbooks for Azure Active Directory reports to analyze your Azure AD tenant.
-## What it is
+## What are Azure Monitor workbooks for Azure AD reports?
Azure AD tracks all activities in your Azure AD in the activity logs. The data in your Azure AD logs enables you to assess how your Azure AD is doing. The Azure Active Directory portal gives you access to three activity logs:
Azure AD tracks all activities in your Azure AD in the activity logs. The data i
Using the access capabilities provided by the Azure portal, you can review the information that is tracked in your activity logs. This option is helpful if you need to do a quick investigation of an event with a limited scope. For example, a user had trouble signing in during a period of a few hours. In this scenario, reviewing the recent records of this user in the sign-in logs can help to shed light on this issue.
-For one-off investigations with a limited scope, the Azure portal is often the easiest way to find the data you need. However, there are also business problems requiring a more complex analysis of the data in your activity logs. This is, for example, true if you're watching for trends in signals of interest. One common example for a scenario that requires a trend analysis is related to blocking legacy authentication in your Azure AD tenant.
+For one-off investigations with a limited scope, the Azure portal is often the easiest way to find the data you need. However, there are also business problems requiring a more complex analysis of the data in your activity logs. One common example for a scenario that requires a trend analysis is related to blocking legacy authentication in your Azure AD tenant.
Azure AD supports several of the most widely used authentication and authorization protocols including legacy authentication. Legacy authentication refers to basic authentication, a widely used industry-standard method for collecting user name and password information. Examples of applications that commonly or only use legacy authentication are:
Azure AD supports several of the most widely used authentication and authorizati
Typically, legacy authentication clients can't enforce any type of second factor authentication. However, multi-factor authentication (MFA) is a common requirement in many environments to provide a high level of protection.
-How can you determine whether it is safe to block legacy authentication in an environment? Answering this question requires an analysis of the sign-ins in your environment for a certain timeframe. This is a scenario where Azure Monitor workbooks can help you.
-
-Workbooks provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. They allow you to tap into multiple data sources from across Azure, and combine them into unified interactive experiences.
+How can you determine whether it's safe to block legacy authentication in an environment? Answering this question requires an analysis of the sign-ins in your environment for a certain timeframe. Workbooks provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. They allow you to tap into multiple data sources from across Azure, and combine them into unified interactive experiences.
With Azure Monitor workbooks, you can:
When working with workbooks, you can either start with an empty workbook, or use
There are: -- **Public templates** published to a [gallery](../../azure-monitor/visualize/workbooks-overview.md#the-gallery) that serve as a good starting point when you are just getting started with workbooks.
+- **Public templates** published to a [gallery](../../azure-monitor/visualize/workbooks-overview.md#the-gallery) that serve as a good starting point when you're just getting started with workbooks.
- **Private templates** when you start building your own workbooks and want to save one as a template to serve as the foundation for multiple workbooks in your tenant.
To use Monitor workbooks, you need:
- A [Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). - [Access](../../azure-monitor/logs/manage-access.md#azure-rbac) to the log analytics workspace-- Following roles in Azure Active Directory (if you are accessing Log Analytics through Azure Active Directory portal)
+- Following roles in Azure Active Directory (if you're accessing Log Analytics through Azure Active Directory portal)
- Security administrator - Security reader - Report reader
active-directory Reference Audit Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-audit-activities.md
Title: Azure Active Directory (Azure AD) audit activity reference | Microsoft Docs description: Get an overview of the audit activities that can be logged in your audit logs in Azure Active Directory (Azure AD). -+ - Previously updated : 08/26/2022- Last updated : 10/28/2022+
The reporting architecture in Azure AD consists of the following components:
- [Audit logs](concept-audit-logs.md) - Provides traceability through logs for all changes done by various features within Azure AD. - **Security reports**
- - [Risky sign-ins](../identity-protection/overview-identity-protection.md) - A risky sign-in is an indicator for a sign-in attempt that might have been performed by someone who is not the legitimate owner of a user account.
+ - [Risky sign-ins](../identity-protection/overview-identity-protection.md) - A risky sign-in is an indicator for a sign-in attempt that might have been performed by someone who isn't the legitimate owner of a user account.
- [Users flagged for risk](../identity-protection/overview-identity-protection.md) - A risky user is an indicator for a user account that might have been compromised.
-This articles lists the audit activities that can be logged in your audit logs.
+This article lists the audit activities that can be logged in your audit logs.
## Access reviews
This articles lists the audit activities that can be logged in your audit logs.
|Access Reviews|Remove reviewer from access review| |Access Reviews|Request Stop Review| |Access Reviews|Request apply review result|
-|Access Reviews|Review Rbac Role membership|
+|Access Reviews|Review RBAC Role membership|
|Access Reviews|Review app assignment| |Access Reviews|Review group membership| |Access Reviews|Review request approval request|
This articles lists the audit activities that can be logged in your audit logs.
|Authentication|Create IdentityProvider| |Authentication|Create V1 application| |Authentication|Create V2 application|
-|Authentication|Create a custom domains in the tenant|
+|Authentication|Create a custom domain in the tenant|
|Authorization|Create a new AdminUserJourney| |Authorization|Create localized resource json| |Authorization|Create new Custom IDP|
This articles lists the audit activities that can be logged in your audit logs.
|Authorization|Update policy| |Authorization|Update user attribute| |Authorization|Upload a CPIM encrypted key|
-|Authorization|User Authorization: API is disabled for tenant featureset|
+|Authorization|User Authorization: API is disabled for tenant feature set|
|Authorization|User Authorization: User granted access as 'Tenant Admin'| |Authorization|User Authorization: User was granted 'Authenticated Users' access rights| |Authorization|Verify if B2C feature is enabled|
This articles lists the audit activities that can be logged in your audit logs.
|Authorization|Onboard to Azure AD Access Reviews| |Authorization|Unlink program control| |Authorization|Update program|
-|Authorization|Disable Desktop Sso|
-|Authorization|Disable Desktop Sso for a specific domain|
+|Authorization|Disable Desktop SSO|
+|Authorization|Disable Desktop SSO for a specific domain|
|Authorization|Disable application proxy| |Authorization|Disable passthrough authentication|
-|Authorization|Enable Desktop Sso|
-|Directory Management|Enable Desktop Sso for a specific domain|
+|Authorization|Enable Desktop SSO|
+|Directory Management|Enable Desktop SSO for a specific domain|
|Directory Management|Enable application proxy| |Directory Management|Enable passthrough authentication| |Directory Management|Create a custom domains in the tenant|
active-directory Reference Azure Ad Sla Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-azure-ad-sla-performance.md
Title: Azure Active Directory SLA performance | Microsoft Docs description: Learn about the Azure AD SLA performance - Previously updated : 09/08/2022 Last updated : 10/31/2022
# Azure Active Directory SLA performance
-As an identity admin, you may need to track Azure AD's service-level agreement (SLA) performance to make sure Azure AD can support your vital apps. This article shows how the Azure AD service has performed according to the [SLA for Azure Active Directory (Azure AD)](https://azure.microsoft.com/support/legal/sla/active-directory/v1_1/).
+As an identity admin, you may need to track the Azure Active Directory (Azure AD) service-level agreement (SLA) performance to make sure Azure AD can support your vital apps. This article shows how the Azure AD service has performed according to the [SLA for Azure Active Directory (Azure AD)](https://azure.microsoft.com/support/legal/sla/active-directory/v1_1/).
You can use this article in discussions with app or business owners to help them understand the performance they can expect from Azure AD. - ## Service availability commitment Microsoft offers Premium Azure AD customers the opportunity to get a service credit if Azure AD fails to meet the documented SLA. When you request a service credit, Microsoft evaluates the SLA for your specific tenant; however, this global SLA can give you an indication of the general health of Azure AD over time. The SLA covers the following scenarios that are vital to businesses: -- **User authentication:** Users are able to login to the Azure Active Directory service.
+- **User authentication:** Users are able to sign in to the Azure AD service.
-- **App access:** Azure Active Directory successfully emits the authentication and authorization tokens required for users to log into applications connected to the service.
+- **App access:** Azure AD successfully emits the authentication and authorization tokens required for users to sign in to applications connected to the service.
For full details on SLA coverage and instructions on requesting a service credit, see the [SLA for Azure Active Directory (Azure AD)](https://azure.microsoft.com/support/legal/sla/active-directory/v1_1/).
You rely on Azure AD to provide identity and access management for your vital sy
To help you plan for moving workloads to Azure AD, we publish past SLA performance. These numbers show the level at which Azure AD met the requirements in the [SLA for Azure Active Directory (Azure AD)](https://azure.microsoft.com/support/legal/sla/active-directory/v1_1/), for all tenants.
-For each month, we truncate the SLA attainment at three places after the decimal. Numbers are not rounded up, so actual SLA attainment is higher than indicated.
-
+The SLA attainment is truncated at three places after the decimal. Numbers are not rounded up, so actual SLA attainment is higher than indicated.
| Month | 2021 | 2022 | | | | |
For each month, we truncate the SLA attainment at three places after the decimal
| November | 99.998% | | | December | 99.978% | | -- ### How is Azure AD SLA measured? The Azure AD SLA is measured in a way that reflects customer authentication experience, rather than simply reporting on whether the system is available to outside connections. This means that the calculation is based on whether:
The Azure AD SLA is measured in a way that reflects customer authentication expe
The numbers above are a global total of Azure AD authentications across all customers and geographies. - ## Incident history All incidents that seriously impact Azure AD performance are documented in the [Azure status history](https://azure.status.microsoft/status/history/). Not all events documented in Azure status history are serious enough to cause Azure AD to go below its SLA. You can view information about the impact of incidents, as well as a root cause analysis of what caused the incident and what steps Microsoft took to prevent future incidents.
-
-- ## Next steps * [Azure AD reports overview](overview-reports.md)
active-directory Reference Azure Monitor Sign Ins Log Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-azure-monitor-sign-ins-log-schema.md
Title: Sign-in log schema in Azure Monitor | Microsoft Docs
-description: Describe the Azure AD sign in log schema for use in Azure Monitor
+description: Describe the Azure AD sign-in log schema for use in Azure Monitor
-+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+
# Interpret the Azure AD sign-in logs schema in Azure Monitor
-This article describes the Azure Active Directory (Azure AD) sign-in log schema in Azure Monitor. Most of the information that's related to sign-ins is provided under the *Properties* attribute of the `records` object.
+This article describes the Azure Active Directory (Azure AD) sign-in log schema in Azure Monitor. Information related to sign-ins is provided under the *Properties* attribute of the `records` object.
```json
This article describes the Azure Active Directory (Azure AD) sign-in log schema
| ResultDescription | N/A or blank | Provides the error description for the sign-in operation. | | riskDetail | riskDetail | Provides the 'reason' behind a specific state of a risky user, sign-in or a risk detection. The possible values are: `none`, `adminGeneratedTemporaryPassword`, `userPerformedSecuredPasswordChange`, `userPerformedSecuredPasswordReset`, `adminConfirmedSigninSafe`, `aiConfirmedSigninSafe`, `userPassedMFADrivenByRiskBasedPolicy`, `adminDismissedAllRiskForUser`, `adminConfirmedSigninCompromised`, `unknownFutureValue`. The value `none` means that no action has been performed on the user or sign-in so far. <br>**Note:** Details for this property require an Azure AD Premium P2 license. Other licenses return the value `hidden`. | | riskEventTypes | riskEventTypes | Risk detection types associated with the sign-in. The possible values are: `unlikelyTravel`, `anonymizedIPAddress`, `maliciousIPAddress`, `unfamiliarFeatures`, `malwareInfectedIPAddress`, `suspiciousIPAddress`, `leakedCredentials`, `investigationsThreatIntelligence`, `generic`, and `unknownFutureValue`. |
-| authProcessingDetails | Azure AD app authentication library | Contains Family, Library, and Platform information in format: "Family: ADAL Library: ADAL.JS 1.0.0 Platform: JS" |
+| authProcessingDetails | Azure AD app authentication library | Contains Family, Library, and Platform information in format: "Family: Microsoft Authentication Library: ADAL.JS 1.0.0 Platform: JS" |
| authProcessingDetails | IsCAEToken | Values are True or False |
-| riskLevelAggregated | riskLevel | Aggregated risk level. The possible values are: `none`, `low`, `medium`, `high`, `hidden`, and `unknownFutureValue`. The value `hidden` means the user or sign-in was not enabled for Azure AD Identity Protection. **Note:** Details for this property are only available for Azure AD Premium P2 customers. All other customers will be returned `hidden`. |
-| riskLevelDuringSignIn | riskLevel | Risk level during sign-in. The possible values are: `none`, `low`, `medium`, `high`, `hidden`, and `unknownFutureValue`. The value `hidden` means the user or sign-in was not enabled for Azure AD Identity Protection. **Note:** Details for this property are only available for Azure AD Premium P2 customers. All other customers will be returned `hidden`. |
+| riskLevelAggregated | riskLevel | Aggregated risk level. The possible values are: `none`, `low`, `medium`, `high`, `hidden`, and `unknownFutureValue`. The value `hidden` means the user or sign-in wasn't enabled for Azure AD Identity Protection. **Note:** Details for this property are only available for Azure AD Premium P2 customers. All other customers will be returned `hidden`. |
+| riskLevelDuringSignIn | riskLevel | Risk level during sign-in. The possible values are: `none`, `low`, `medium`, `high`, `hidden`, and `unknownFutureValue`. The value `hidden` means the user or sign-in wasn't enabled for Azure AD Identity Protection. **Note:** Details for this property are only available for Azure AD Premium P2 customers. All other customers will be returned `hidden`. |
| riskState | riskState | Reports status of the risky user, sign-in, or a risk detection. The possible values are: `none`, `confirmedSafe`, `remediated`, `dismissed`, `atRisk`, `confirmedCompromised`, `unknownFutureValue`. | | DurationMs | - | This value is unmapped, and you can safely ignore this field. | | CallerIpAddress | - | The IP address of the client that made the request. |
active-directory Reference Basic Info Sign In Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-basic-info-sign-in-logs.md
Title: Basic info in the Azure AD sign-in logs | Microsoft Docs description: Learn what the basic info in the sign-in logs is about. -+ - Previously updated : 08/26/2022- Last updated : 10/28/2022+
# Basic info in the Azure AD sign-in logs
-Azure AD logs all sign-ins into an Azure tenant for compliance. As an IT administrator, you need to know what the values in the sign-in logs mean, so that you can interpret the log values correctly.
+Azure AD logs all sign-ins into an Azure tenant for compliance. As an IT administrator, you need to know what the values in the sign-in logs mean, so that you can interpret the log values correctly. [Learn how to access, view, and analyze Azure AD sign-in logs](concept-sign-ins.md)
This article explains the values on the Basic info tab of the sign-ins log.
In Azure AD, a resource access has three relevant components:
- **How** – The client (Application) used for the access. - **What** – The target (Resource) accessed by the identity. - Each component has an associated unique identifier (ID). Below is an example of a user using the Microsoft Azure classic deployment model to access the Azure portal. ![Open audit logs](./media/reference-basic-info-sign-in-logs/sign-in-details-basic-info.png)
-### Tenant identifiers
+### Tenant
The sign-in log tracks two tenant identifiers:
The request ID is an identifier that corresponds to an issued token. If you are
The correlation ID groups sign-ins from the same sign-in session. The identifier was implemented for convenience. Its accuracy is not guaranteed because the value is based on parameters passed by a client.
+### Sign-in
-## Sign-in identifier
-
-The sign-in identifier is a string the user provides to Azure AD to identify itself when attempting to sign-in. It's usually a UPN, but can be another identifier such as a phone number.
+The sign-in identifier is a string the user provides to Azure AD to identify itself when attempting to sign-in. It's usually a user principal name (UPN), but can be another identifier such as a phone number.
+### Authentication requirement
-## Authentication requirement
+This attribute shows the highest level of authentication needed through all the sign-in steps for the sign-in to succeed. Graph API supports `$filter` (`eq` and `startsWith` operators only).
-This attribute shows the highest level of authentication needed through all the sign-in steps for the sign-in to succeed. In the Graph API, supports `$filter` (`eq` and `startsWith` operators only).
-
-## Sign-in event types
+### Sign-in event types
Indicates the category of the sign in the event represents. For user sign-ins, the category can be `interactiveUser` or `nonInteractiveUser` and corresponds to the value for the **isInteractive** property on the sign-in resource. For managed identity sign-ins, the category is `managedIdentity`. For service principal sign-ins, the category is **servicePrincipal**. The Azure portal doesn't show this value, but the sign-in event is placed in the tab that matches its sign-in event type. Possible values are:
Indicates the category of the sign in the event represents. For user sign-ins, t
The Microsoft Graph API, supports: `$filter` (`eq` operator only)
-## User type
+### User type
The type of a user. Examples include `member`, `guest`, or `external`.
-## Cross-tenant access type
+### Cross-tenant access type
This attribute describes the type of cross-tenant access used by the actor to access the resource. Possible values are:
This attribute describes the type of cross-tenant access used by the actor to ac
If the sign-in didn't pass the boundaries of a tenant, the value is `none`.
-## Conditional access evaluation
+### Conditional access evaluation
This value shows whether continuous access evaluation (CAE) was applied to the sign-in event. There are multiple sign-in requests for each authentication. Some are shown on the interactive tab, while others are shown on the non-interactive tab. CAE is only displayed as true for one of the requests, and it can be on the interactive tab or non-interactive tab. For more information, see [Monitor and troubleshoot sign-ins with continuous access evaluation in Azure AD](../conditional-access/howto-continuous-access-evaluation-troubleshoot.md). ------------------------------ ## Next steps
-* [Sign-in logs in Azure Active Directory](concept-sign-ins.md)
-* [What is the sign-in diagnostic in Azure AD?](overview-sign-in-diagnostics.md)
+* [Learn about exporting Azure AD sign-in logs](concept-activity-logs-azure-monitor.md)
+* [Explore the sign-in diagnostic in Azure AD](overview-sign-in-diagnostics.md)
active-directory Reference Powershell Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-powershell-reporting.md
Title: Azure AD PowerShell cmdlets for reporting | Microsoft Docs description: Reference of the Azure AD PowerShell cmdlets for reporting. -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+
> [!NOTE] > These PowerShell cmdlets currently only work with the [Azure AD Preview](/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#directory_auditing) Module. Please note that the preview module is not suggested for production use.
-To install the public preview release, use the following.
+To install the public preview release, use the following:
```powershell Install-module AzureADPreview ```
-For more information on how to connect to Azure AD using PowerShell, please see the article [Azure AD PowerShell for Graph](/powershell/azure/active-directory/install-adv2).
+For more information on how to connect to Azure AD using PowerShell, see the article [Azure AD PowerShell for Graph](/powershell/azure/active-directory/install-adv2).
With Azure Active Directory (Azure AD) reports, you can get details on activities around all the write operations in your directory (audit logs) and authentication data (sign-in logs). Although the information is available by using the MS Graph API, now you can retrieve the same data by using the Azure AD PowerShell cmdlets for reporting.
active-directory Reference Reports Data Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-reports-data-retention.md
Title: How long does Azure AD store reporting data? | Microsoft Docs
description: Learn how long Azure stores the various types of reporting data. documentationcenter: ''-+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+
# How long does Azure AD store reporting data?
-In this article, you learn about the data retention policies for the different activity reports in Azure Active Directory.
+In this article, you learn about the data retention policies for the different activity reports in Azure Active Directory (Azure AD).
### When does Azure AD start collecting data? | Azure AD Edition | Collection Start | | :-- | :-- | | Azure AD Premium P1 <br /> Azure AD Premium P2 | When you sign up for a subscription |
-| Azure AD Free| The first time you open the [Azure Active Directory blade](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) or use the [reporting APIs](./overview-reports.md) |
+| Azure AD Free| The first time you open [Azure Active Directory](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) or use the [reporting APIs](./overview-reports.md) |
You can retain the audit and sign-in activity data for longer than the default r
### Can I see last month's data after getting an Azure AD premium license?
-**No**, you can't. Azure stores up to seven days of activity data for a free version. This means, when you switch from a free to a premium version, you can only see up to 7 days of data.
+**No**, you can't. Azure stores up to seven days of activity data for a free version. When you switch from a free to a premium version, you can only see up to 7 days of data.
active-directory Reference Reports Latencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-reports-latencies.md
Title: Azure Active Directory reporting latencies | Microsoft Docs description: Learn about the amount of time it takes for reporting events to show up in your Azure portal -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+
If you already have activities data with your free license, then you can see it
There are two types of security reports: -- [Risky sign-ins](../identity-protection/overview-identity-protection.md) - A risky sign-in is an indicator for a sign-in attempt that might have been performed by someone who is not the legitimate owner of a user account.
+- [Risky sign-ins](../identity-protection/overview-identity-protection.md) - A risky sign-in is an indicator for a sign-in attempt that might have been performed by someone who isn't the legitimate owner of a user account.
- [Users flagged for risk](../identity-protection/overview-identity-protection.md) - A risky user is an indicator for a user account that might have been compromised. The following table lists the latency information for security reports.
active-directory Troubleshoot Audit Data Verified Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/troubleshoot-audit-data-verified-domain.md
Title: 'Troubleshoot audit data of verified domain change | Microsoft Docs' description: Provides you with information that will appear in the Azure Active Directory activity logs when you change a users verified domain. -+ Previously updated : 08/26/2022- Last updated : 11/01/2022+ # Troubleshoot: Audit data on verified domain change
-## I have a lot of changes to my users and I am not sure what the cause of it is.
+## I have a lot of changes to my users and I'm not sure what the cause of it is.
### Symptoms
-I check the Azure AD audit logs, and see multiple user updates occurring in my Azure AD tenant. These **Update User** events do not display **Actor** information, which causes uncertainty as to what/who triggered the mass changes to users.
+I check the Azure AD audit logs, and see multiple user updates occurring in my Azure AD tenant. These **Update User** events don't display **Actor** information, which causes uncertainty as to what/who triggered the mass changes to users.
### Cause
- A common reason behind mass object changes is a non-synchronous backend operation called **ProxyCalc**. **ProxyCalc** is the logic that determines the appropriate **UserPrincipalName** and **Proxy Addresses**, that are updated in Azure AD users, groups or contacts. The design behind **ProxyCalc** is to ensure that all **UserPrincipalName** and **Proxy Addresses** are consistent in Azure AD at any time. **ProxyCalc** must be triggered by an explicit change like a verified domain change and does not perpetually run in the background as a task.
+ A common reason behind mass object changes is a non-synchronous backend operation called **ProxyCalc**. **ProxyCalc** is the logic that determines the appropriate **UserPrincipalName** and **Proxy Addresses** that are updated in Azure AD users, groups, or contacts. The design behind **ProxyCalc** is to ensure that all **UserPrincipalName** and **Proxy Addresses** are consistent in Azure AD at any time. **ProxyCalc** must be triggered by an explicit change like a verified domain change and doesn't perpetually run in the background as a task.
One of the admin tasks that could trigger **ProxyCalc** is whenever there's a
For example, if you add a verified domain Fabrikam.com to your Contoso.onmicrosoft.com tenant, this action will trigger a ProxyCalc operation on all objects in the tenant. This event will be captured in the Azure AD Audit logs as **Update User** events preceded by an **Add verified domain** event. On the other hand, if Fabrikam.com was removed from the Contoso.onmicrosoft.com tenant, then all the **Update User** events will be preceded by a **Remove verified domain** event.
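If you send your audit logs to a Log Analytics workspace, a query along the following lines can help correlate the domain change with the subsequent user updates. This is only a sketch; the workspace ID and the 30-day window are placeholders.

```azurecli-interactive
# Requires the Log Analytics CLI extension: az extension add --name log-analytics
# List domain changes and user updates in chronological order so they can be correlated
az monitor log-analytics query \
  --workspace <log-analytics-workspace-id> \
  --analytics-query "AuditLogs
    | where TimeGenerated > ago(30d)
    | where OperationName in ('Add verified domain', 'Remove verified domain', 'Update user')
    | project TimeGenerated, OperationName, Identity
    | order by TimeGenerated asc"
```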
-#### Additional notes:
+#### Notes:
-ProxyCalc does not cause changes to certain objects that:
+ProxyCalc doesn't cause changes to certain objects that:
-- do not have an active Exchange license
+- don't have an active Exchange license
- have **MSExchRemoteRecipientType** set to Null -- are not considered a shared resource. Shared Resource is when **CloudMSExchRecipientDisplayType** contains one of the following values: **MailboxUser (shared)**, **PublicFolder**, **ConferenceRoomMailbox**, **EquipmentMailbox**, **ArbitrationMailbox**, **RoomList**, **TeamMailboxUser**, **Group mailbox**, **Scheduling mailbox**, **ACLableMailboxUser**, **ACLableTeamMailboxUser**
+- aren't considered a shared resource. Shared Resource is when **CloudMSExchRecipientDisplayType** contains one of the following values: **MailboxUser (shared)**, **PublicFolder**, **ConferenceRoomMailbox**, **EquipmentMailbox**, **ArbitrationMailbox**, **RoomList**, **TeamMailboxUser**, **Group mailbox**, **Scheduling mailbox**, **ACLableMailboxUser**, **ACLableTeamMailboxUser**
In order to build more correlation between these two disparate events, Microsoft is working on updating the **Actor** info in the audit logs to identify these changes as triggered by a verified domain change. This action will help check when the verified domain change event took place and started to mass update the objects in their tenant.
-Additionally, in most cases, there are no changes to users as their **UserPrincipalName** and **Proxy Addresses** are consistent, so we are working to display in Audit Logs only those updates that caused an actual change to the object. This action will prevent noise in the audit logs and help admins correlate the remaining user changes to verified domain change event as explained above.
+Additionally, in most cases, there are no changes to users as their **UserPrincipalName** and **Proxy Addresses** are consistent, so we're working to display in Audit Logs only those updates that caused an actual change to the object. This action will prevent noise in the audit logs and help admins correlate the remaining user changes to the verified domain change event, as explained above.
## Next Steps
active-directory Troubleshoot Graph Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/troubleshoot-graph-api.md
Title: 'Troubleshoot errors in Azure Active Directory reporting API | Microsoft Docs' description: Provides you with a resolution to errors while calling Azure Active Directory Reporting APIs. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+
Accessing sign-in reports requires an Azure Active Directory premium 1 (P1) lice
If you see this error message while trying to access audit logs or sign-ins using the API, make sure that your account is part of the **Security Reader** or **Report Reader** role in your Azure Active Directory tenant.
-### Error: Application missing AAD 'Read directory data' permission
+### Error: Application missing Azure AD 'Read directory data' permission
Follow the steps in the [Prerequisites to access the Azure Active Directory reporting API](howto-configure-prerequisites-for-reporting-api.md) to ensure your application is running with the right set of permissions.
active-directory Troubleshoot Missing Audit Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/troubleshoot-missing-audit-data.md
Title: 'Troubleshoot Missing data in activity logs | Microsoft Docs' description: Provides you with a resolution to missing data in Azure Active Directory activity logs. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+
### Symptoms
-I performed some actions in the Azure portal and expected to see the audit logs for those actions in the `Activity logs > Audit Logs` blade, but I can't find them.
+I performed some actions in the Azure portal and expected to see the audit logs for those actions in the `Activity logs > Audit Logs`, but I can't find them.
![Screenshot shows Audit Log entries.](./media/troubleshoot-missing-audit-data/01.png)
Actions don't appear immediately in the activity logs. The table below enumera
### Resolution
-Wait for 15 minutes to two hours and see if the actions appear in the log. If you don't see the logs even after two hours, please [file a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) and we will look into it.
+Wait for 15 minutes to two hours and see if the actions appear in the log. If you don't see the logs even after two hours, [file a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest), and we'll look into it.
## I can't find recent user sign-ins in the Azure Active Directory sign-ins activity log ### Symptoms
-I recently signed into the Azure portal and expected to see the sign-in logs for those actions in the `Activity logs > Sign-ins` blade, but I can't find them.
+I recently signed into the Azure portal and expected to see the sign-in logs for those actions in the `Activity logs > Sign-ins`, but I can't find them.
![Screenshot shows Sign-ins in the Activity log.](./media/troubleshoot-missing-audit-data/02.png)
Actions don't appear immediately in the activity logs. The table below enumera
### Resolution
-Wait for 15 minutes to two hours and see if the actions appear in the log. If you don't see the logs even after two hours, please [file a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) and we will look into it.
+Wait for 15 minutes to two hours and see if the actions appear in the log. If you don't see the logs even after two hours, [file a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest), and we'll look into it.
## I can't view more than 30 days of report data in the Azure portal
Depending on your license, Azure Active Directory Actions stores activity report
| Report | Azure AD Free | Azure AD Premium P1 | Azure AD Premium P2 | | | | | | | Directory Audit | 7 days | 30 days | 30 days |
-| Sign-in Activity | Not available. You can access your own sign-ins for 7 days from the individual user profile blade | 30 days | 30 days |
+| Sign-in Activity | Not available. You can access your own sign-ins for 7 days from the individual user profile | 30 days | 30 days |
For more information, see [Azure Active Directory report retention policies](reference-reports-data-retention.md).
active-directory Troubleshoot Missing Data Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/troubleshoot-missing-data-download.md
Title: 'Troubleshooting: Missing data in the downloaded activity logs | Microsoft Docs' description: Provides you with a resolution to missing data in downloaded Azure Active Directory activity logs. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+
When you download activity logs in the Azure portal, we limit the scale to 250,0
## Resolution
-You can leverage [Azure AD Reporting APIs](concept-reporting-api.md) to fetch up to a million records at any given point.
+You can use [Azure AD Reporting APIs](concept-reporting-api.md) to fetch up to a million records at any given point.
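As a rough sketch, you can call the Microsoft Graph reporting endpoint from the Azure CLI as shown below. The `$top` page size is illustrative, your account needs an appropriate permission such as *AuditLog.Read.All*, and you follow the `@odata.nextLink` value in each response to retrieve the next page.

```azurecli-interactive
# Retrieve one page of sign-in events from the Microsoft Graph reporting API
az rest --method get \
  --url 'https://graph.microsoft.com/v1.0/auditLogs/signIns?$top=500'
```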
## Next steps
active-directory Workbook Authentication Prompts Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-authentication-prompts-analysis.md
Title: Authentication prompts analysis workbook in Azure AD | Microsoft Docs description: Learn how to use the authentication prompts analysis workbook. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+
This article provides you with an overview of this workbook.
Have you recently heard of complaints from your users about getting too many authentication prompts?
-Over prompting users impacts your user's productivity and often leads users getting phished for MFA. To be clear, MFA is essential! We are not talking about if you should require MFA but how frequently you should prompt your users.
+Overprompting users can affect your users' productivity and often leads to users getting phished for MFA. To be clear, MFA is essential! The question isn't whether you should require MFA, but how frequently you should prompt your users.
Typically, this scenario is caused by:
In many environments, the most used apps are business productivity apps. Anythin
![Authentication prompts by application](./media/workbook-authentication-prompts-analysis/authentication-prompts-by-application.png)
-The prompts by application list-view shows additional information such as timestamps, and request IDs that help with investigations.
+The prompts by application list view shows additional information, such as timestamps and request IDs, that helps with investigations.
Additionally, you get a summary of the average and median prompts count for your tenant.
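Outside the workbook, a comparable summary can be approximated with a Log Analytics query. This sketch treats every logged sign-in as a prompt, which overcounts silent sign-ins; the workspace ID and the seven-day window are placeholders.

```azurecli-interactive
# Requires the Log Analytics CLI extension: az extension add --name log-analytics
# Approximate the average and median number of sign-ins per user over seven days
az monitor log-analytics query \
  --workspace <log-analytics-workspace-id> \
  --analytics-query "SigninLogs
    | where TimeGenerated > ago(7d)
    | summarize Prompts = count() by UserPrincipalName
    | summarize AveragePrompts = avg(Prompts), MedianPrompts = percentile(Prompts, 50)"
```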
Filtering for a specific user that has many authentication requests or only show
## Best practices
-If data isn't showing up or seems to be showing up incorrectly, please confirm that you have set the **Log Analytics Workspace** and **Subscriptions** on the proper resources.
+If data isn't showing up or seems to be showing up incorrectly, confirm that you have set the **Log Analytics Workspace** and **Subscriptions** on the proper resources.
![Set workspace and subscriptions](./media/workbook-authentication-prompts-analysis/workspace-and-subscriptions.png)
If the visuals are taking too much time to load, try reducing the Time filter to
## Next steps -- To understand more about the different policies that impact MFA prompts, see [Optimize reauthentication prompts and understand session lifetime for Azure AD Multi-Factor Authentication](../authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md).
+- To understand more about the different policies that affect MFA prompts, see [Optimize reauthentication prompts and understand session lifetime for Azure AD Multi-Factor Authentication](../authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md).
-- To learn more about the different vulnerabilities of different MFA methods, see [All your creds are belong to us!](https://aka.ms/allyourcreds).
+- To learn more about the different vulnerabilities of different MFA methods, see [All your creds belong to us!](https://aka.ms/allyourcreds).
- To learn how to move users from telecom-based methods to the Authenticator app, see [How to run a registration campaign to set up Microsoft Authenticator - Microsoft Authenticator app](../authentication/how-to-mfa-registration-campaign.md).
active-directory Workbook Conditional Access Gap Analyzer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-conditional-access-gap-analyzer.md
Title: Conditional access gap analyzer workbook in Azure AD | Microsoft Docs description: Learn how to use the conditional access gap analyzer workbook. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+
The workbook has four sections:
- Users signing in using legacy authentication -- Number of sign-ins by applications that are not impacted by conditional access policies
+- Number of sign-ins by applications that aren't impacted by conditional access policies
- High risk sign-in events bypassing conditional access policies -- Number of sign-ins by location that were not affected by conditional access policies
+- Number of sign-ins by location that weren't affected by conditional access policies
![Conditional access coverage by location](./media/workbook-conditional-access-gap-analyzer/conditianal-access-by-location.png)
active-directory Workbook Cross Tenant Access Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-cross-tenant-access-activity.md
Title: Cross-tenant access activity workbook in Azure AD | Microsoft Docs description: Learn how to use the cross-tenant access activity workbook. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+
active-directory Workbook Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-legacy authentication.md
Title: Sign-ins using legacy authentication workbook in Azure AD | Microsoft Docs description: Learn how to use the sign-ins using legacy authentication workbook. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+
# Sign-ins using legacy authentication workbook
-Have you ever wondered how you can determine whether it is safe to turn off legacy authentication in your tenant? The sign-ins using legacy authentication workbook helps you to answer this question.
+Have you ever wondered how you can determine whether it's safe to turn off legacy authentication in your tenant? The sign-ins using legacy authentication workbook helps you to answer this question.
This article gives you an overview of this workbook.
Examples of applications that commonly or only use legacy authentication are:
- Apps using legacy auth with mail protocols like POP, IMAP, and SMTP AUTH.
-Single-factor authentication (for example, username and password) doesn't provide the required level of protection for today's computing environments. Passwords are bad as they are easy to guess and humans are bad at choosing good passwords.
+Single-factor authentication (for example, username and password) doesn't provide the required level of protection for today's computing environments. Passwords are bad as they're easy to guess and humans are bad at choosing good passwords.
Unfortunately, legacy authentication: -- Does not support multi-factor authentication (MFA) or other strong authentication methods.
+- Doesn't support multi-factor authentication (MFA) or other strong authentication methods.
- Makes it impossible for your organization to move to passwordless authentication.
This workbook supports multiple filters:
- Many email protocols that once relied on legacy authentication now support more secure modern authentication methods. If you see legacy email authentication protocols in this workbook, consider migrating to modern authentication for email instead. For more information, see [Deprecation of Basic authentication in Exchange Online](/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online). -- Some clients can use both legacy authentication or modern authentication depending on client configuration. If you see "modern mobile/desktop client" or "browser" for a client in the Azure AD logs, it is using modern authentication. If it has a specific client or protocol name, such as "Exchange ActiveSync", it is using legacy authentication to connect to Azure AD. The client types in conditional access, and the Azure AD reporting page in the Azure portal demarcate modern authentication clients and legacy authentication clients for you, and only legacy authentication is captured in this workbook.
+- Some clients can use both legacy authentication or modern authentication depending on client configuration. If you see "modern mobile/desktop client" or "browser" for a client in the Azure AD logs, it's using modern authentication. If it has a specific client or protocol name, such as "Exchange ActiveSync", it's using legacy authentication to connect to Azure AD. The client types in conditional access, and the Azure AD reporting page in the Azure portal demarcate modern authentication clients and legacy authentication clients for you, and only legacy authentication is captured in this workbook.
## Next steps
active-directory Workbook Risk Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-risk-analysis.md
Title: Identity protection risk analysis workbook in Azure AD | Microsoft Docs description: Learn how to use the identity protection risk analysis workbook. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+
active-directory Workbook Sensitive Operations Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-sensitive-operations-report.md
Title: Sensitive operations report workbook in Azure AD | Microsoft Docs description: Learn how to use the sensitive operations report workbook. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+
# Sensitive operations report workbook
-As an It administrator, you need to be able to identify compromises in your environment to ensure that you can keep it in a healthy state.
+As an IT administrator, you need to be able to identify compromises in your environment to ensure that you can keep it in a healthy state.
The sensitive operations report workbook is intended to help identify suspicious application and service principal activity that may indicate compromises in your environment.
This article provides you with an overview of this workbook.
This workbook identifies recent sensitive operations that have been performed in your tenant and which may indicate service principal compromise.
-If your organization is new to Azure monitor workbooks, you need to integrate your Azure AD sign-in and audit logs with Azure Monitor before accessing the workbook. This allows you to store, and query, and visualize your logs using workbooks for up to two years. Only sign-in and audit events created after Azure Monitor integration will be stored, so the workbook will not contain insights prior to that date. Learn more about the prerequisites to Azure Monitor workbooks for Azure Active Directory. If you have previously integrated your Azure AD sign-in and audit logs with Azure Monitor, you can use the workbook to assess past information.
+If your organization is new to Azure monitor workbooks, you need to integrate your Azure AD sign-in and audit logs with Azure Monitor before accessing the workbook. This integration allows you to store, and query, and visualize your logs using workbooks for up to two years. Only sign-in and audit events created after Azure Monitor integration will be stored, so the workbook won't contain insights prior to that date. Learn more about the prerequisites to Azure Monitor workbooks for Azure Active Directory. If you've previously integrated your Azure AD sign-in and audit logs with Azure Monitor, you can use the workbook to assess past information.
This workbook is split into four sections:
![Workbook sections](./media/workbook-sensitive-operations-report/workbook-sections.png) -- **Modified application and service principal credentials/authentication methods** - This report flags actors who have recently changed many service principal credentials, as well as how many of each type of service principal credentials have been changed.
+- **Modified application and service principal credentials/authentication methods** - This report flags actors who have recently changed many service principal credentials, and how many of each type of service principal credentials have been changed.
- **New permissions granted to service principals** - This workbook also highlights recently granted OAuth 2.0 permissions to service principals.
This section includes the following data to help you detect:
- All new credentials added to apps and service principals, including the credential type -- Top actors and the amount of credentials modifications they performed
+- Top actors and the number of credential modifications they performed
- A timeline for all credential changes
This section includes the following data to help you detect:
### New permissions granted to service principals
-In cases where the attacker cannot find a service principal or an application with a high privilege set of permissions through which to gain access, they will often attempt to add the permissions to another service principal or app.
+In cases where the attacker can't find a service principal or an application with a high privilege set of permissions through which to gain access, they'll often attempt to add the permissions to another service principal or app.
This section includes a breakdown of the AppOnly permissions grants to existing service principals. Admins should investigate any instances of excessive high permissions being granted, including, but not limited to, Exchange Online, Microsoft Graph and Azure AD Graph.
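To spot-check such grants outside the workbook, you can query the audit logs for app role assignments to service principals. The operation name below is the standard audit activity for application (app-only) permission grants; the workspace ID and time window are placeholders.

```azurecli-interactive
# Requires the Log Analytics CLI extension: az extension add --name log-analytics
# List recent application permission (app role) grants to service principals
az monitor log-analytics query \
  --workspace <log-analytics-workspace-id> \
  --analytics-query "AuditLogs
    | where TimeGenerated > ago(30d)
    | where OperationName == 'Add app role assignment to service principal'
    | project TimeGenerated, InitiatedBy, TargetResources"
```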
This section includes an overview of all changes made to service principal membe
Another common approach to gain a long-term foothold in the environment is to: - Modify the tenant's federated domain trusts.-- Add an additional SAML IDP that is controlled by the attacker as a trusted authentication source.
+- Add another SAML IDP that is controlled by the attacker as a trusted authentication source.
This section includes the following data:
This paragraph lists the supported filters for each section.
**Use:** -- **Modified application and service principal credentials** to look out for credentials being added to service principals that are not frequently used in your organization. Use the filters present in this section to further investigate any of the suspicious actors or service principals that were modified.
+- **Modified application and service principal credentials** to look out for credentials being added to service principals that aren't frequently used in your organization. Use the filters present in this section to further investigate any of the suspicious actors or service principals that were modified.
- **New permissions granted to service principals** to look out for broad or excessive permissions being added to service principals by actors that may be compromised.
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
Title: Concepts - Kubernetes basics for Azure Kubernetes Services (AKS)
description: Learn the basic cluster and workload components of Kubernetes and how they relate to features in Azure Kubernetes Service (AKS) Previously updated : 03/05/2020 Last updated : 10/31/2022
Most stateless applications in AKS should use the deployment model rather than s
You don't want to disrupt management decisions with an update process if your application requires a minimum number of available instances. *Pod Disruption Budgets* define how many replicas in a deployment can be taken down during an update or node upgrade. For example, if you have *five (5)* replicas in your deployment, you can define a pod disruption of *4 (four)* to only allow one replica to be deleted or rescheduled at a time. As with pod resource limits, best practice is to define pod disruption budgets on applications that require a minimum number of replicas to always be present.
-Deployments are typically created and managed with `kubectl create` or `kubectl apply`. Create a deployment by defining a manifest file in the YAML format.
+Deployments are typically created and managed with `kubectl create` or `kubectl apply`. Create a deployment by defining a manifest file in the YAML format.
The following example creates a basic deployment of the NGINX web server. The deployment specifies *three (3)* replicas to be created, and requires port *80* to be open on the container. Resource requests and limits are also defined for CPU and memory.
spec:
memory: 256Mi ```
+A breakdown of the deployment specifications in the YAML manifest file is as follows:
+
+| Specification | Description |
+| -- | - |
+| `.apiVersion` | Specifies the API group and API resource you want to use when creating the resource. |
+| `.kind` | Specifies the type of resource you want to create. |
+| `.metadata.name` | Specifies the name of the deployment. |
+| `.spec.replicas` | Specifies how many pods to create. This file will create three replicated pods. |
+| `.spec.selector` | Specifies which pods will be affected by this deployment. |
+| `.spec.selector.matchLabels` | Contains a map of *{key, value}* pairs that allows the deployment to find and manage the created pods. |
+| `.spec.selector.matchLabels.app` | Has to match the `app` label in `.spec.template.metadata.labels`. |
+| `.spec.template.metadata.labels` | Specifies the *{key, value}* pairs attached to the object. |
+| `.spec.template.metadata.labels.app` | Has to match `.spec.selector.matchLabels.app`. |
+| `.spec.template.spec.containers` | Specifies the list of containers belonging to the pod. |
+| `.spec.template.spec.containers.name` | Specifies the name of the container specified as a DNS label. |
+| `.spec.template.spec.containers.image` | Specifies the container image name. This file will run the *nginx* image from Docker Hub. |
+| `.spec.template.spec.containers.ports` | Specifies the list of ports to expose from the container. |
+| `.spec.template.spec.containers.ports.containerPort` | Specifies the port number to expose on the pod's IP address. |
+| `.spec.template.spec.containers.resources` | Specifies the compute resources required by the container. |
+| `.spec.template.spec.containers.resources.requests` | Specifies the minimum amount of compute resources required. |
+| `.spec.template.spec.containers.resources.requests.cpu` | Specifies the minimum amount of CPU required. |
+| `.spec.template.spec.containers.resources.requests.memory` | Specifies the minimum amount of memory required. |
+| `.spec.template.spec.containers.resources.limits` | Specifies the maximum amount of compute resources allowed. This limit is enforced by the kubelet. |
+| `.spec.template.spec.containers.resources.limits.cpu` | Specifies the maximum amount of CPU allowed. This limit is enforced by the kubelet. |
+| `.spec.template.spec.containers.resources.limits.memory` | Specifies the maximum amount of memory allowed. This limit is enforced by the kubelet. |
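To create the deployment described by a manifest like the one above, you can save it to a file and apply it with `kubectl`. The file name below is only an assumption for this example.

```azurecli-interactive
# Apply the manifest, then confirm the deployment and its pods were created (file name is illustrative)
kubectl apply -f nginx-deployment.yaml
kubectl get deployments
kubectl get pods
```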
+ More complex applications can be created by including services (such as load balancers) within the YAML manifest. For more information, see [Kubernetes deployments][kubernetes-deployments].
aks Deploy Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-marketplace.md
Title: How to deploy Azure Container offers for Azure Kubernetes Service (AKS) from the Azure Marketplace
-description: Learn how to deploy Azure Container offers from the Azure Marketplace on an Azure Kubernetes Service (AKS) cluster.
+ Title: Deploy an Azure container offer from Azure Marketplace
+description: Learn how to deploy Azure container offers from Azure Marketplace on an Azure Kubernetes Service (AKS) cluster.
Last updated 09/30/2022
-# How to deploy a Container offer from Azure Marketplace (preview)
+# Deploy a container offer from Azure Marketplace (preview)
-[Azure Marketplace][azure-marketplace] is an online store that contains thousands of IT software applications and services built by industry-leading technology companies. In Azure Marketplace you can find, try, buy, and deploy the software and services you need to build new solutions and manage your cloud infrastructure. The catalog includes solutions for different industries and technical areas, free trials, and also consulting services from Microsoft partners.
+[Azure Marketplace][azure-marketplace] is an online store that contains thousands of IT software applications and services built by industry-leading technology companies. In Azure Marketplace, you can find, try, buy, and deploy the software and services that you need to build new solutions and manage your cloud infrastructure. The catalog includes solutions for different industries and technical areas, free trials, and consulting services from Microsoft partners.
-Included among these solutions are Kubernetes application-based Container offers, which contain applications meant to run on Kubernetes clusters such as Azure Kubernetes Service (AKS). In this article, you will learn how to:
+Included among these solutions are Kubernetes application-based container offers. These offers contain applications that are meant to run on Kubernetes clusters such as Azure Kubernetes Service (AKS). In this article, you'll learn how to:
-- Browse offers in Azure Marketplace-- Purchase an application-- Deploy the application on your AKS cluster-- Monitor usage and billing information
+- Browse offers in Azure Marketplace.
+- Purchase an application.
+- Deploy the application on your AKS cluster.
+- Monitor usage and billing information.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] > [!NOTE]
-> This feature is currently only supported in the following regions:
+> This feature is currently supported only in the following regions:
> > - West Central US > - West Europe
-> - East US.
+> - East US
## Register resource providers
-You must have registered the `Microsoft.ContainerService` and `Microsoft.KubernetesConfiguration` providers on your subscription using the `az provider register` command:
+Before you deploy a container offer, you must register the `Microsoft.ContainerService` and `Microsoft.KubernetesConfiguration` providers on your subscription by using the `az provider register` command:
```azurecli-interactive az provider register --namespace Microsoft.ContainerService --wait az provider register --namespace Microsoft.KubernetesConfiguration --wait ```
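To confirm that both registrations have completed, you can check each provider's state; `Registered` is the expected value:

```azurecli-interactive
az provider show --namespace Microsoft.ContainerService --query registrationState --output tsv
az provider show --namespace Microsoft.KubernetesConfiguration --query registrationState --output tsv
```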
-## Browse offers
+## Select and deploy a Kubernetes offer
-- Begin by visiting the Azure portal and searching for *"Marketplace"* in the top search bar.
+1. In the [Azure portal](https://ms.portal.azure.com/), search for **Marketplace** on the top search bar. In the results, under **Services**, select **Marketplace**.
-- You can search for an offer or publisher directly by name or browse all offers. To find Kubernetes application offers, use the *Product type* filter for *Azure Containers*.
+1. You can search for an offer or publisher directly by name, or you can browse all offers. To find Kubernetes application offers, use the **Product Type** filter for **Azure Containers**.
-- > [!IMPORTANT]
- > The *Azure Containers* category includes both Kubernetes applications and standalone container images. This walkthrough is Kubernetes application-specific. If you find the steps to deploy an offer differ in some way, you are most likely trying to deploy a container image-based offer instead of a Kubernetes-application based offer.
- >
- > To ensure you're searching for Kubernetes applications, include the term `KubernetesApps` in your search.
+ :::image type="content" source="./media/deploy-marketplace/browse-marketplace-inline.png" alt-text="Screenshot of Azure Marketplace offers in the Azure portal, with the filter for product type set to Azure containers." lightbox="./media/deploy-marketplace/browse-marketplace-full.png":::
-- Once you've decided on an application, click on the offer.
+ > [!IMPORTANT]
+ > The **Azure Containers** category includes both Kubernetes applications and standalone container images. This walkthrough is specific to Kubernetes applications. If you find that the steps to deploy an offer differ in some way, you're most likely trying to deploy a container image-based offer instead of a Kubernetes application-based offer.
+ >
+ > To ensure that you're searching for Kubernetes applications, include the term **KubernetesApps** in your search.
- :::image type="content" source="./media/deploy-marketplace/browse-marketplace-inline.png" alt-text="Screenshot of the Azure portal Marketplace offer page. The product type filter, set to Azure Containers, is highlighted and several offers are shown." lightbox="./media/deploy-marketplace/browse-marketplace-full.png":::
+1. After you decide on an application, select the offer.
-## Purchasing a Kubernetes offer
+1. On the **Plans + Pricing** tab, select an option. Ensure that the terms are acceptable, and then select **Create**.
-- Review the plan and prices tab, select an option, and ensure the terms are acceptable before proceeding.
+ :::image type="content" source="./media/deploy-marketplace/plans-pricing-inline.png" alt-text="Screenshot of the offer purchasing page in the Azure portal, including plan and pricing information." lightbox="./media/deploy-marketplace/plans-pricing-full.png":::
- :::image type="content" source="./media/deploy-marketplace/plans-pricing-inline.png" alt-text="Screenshot of the Azure portal offer purchasing page. The tab for viewing plans and pricing information is shown." lightbox="./media/deploy-marketplace/plans-pricing-full.png":::
+1. Follow each page in the wizard, all the way through **Review + Create**. Fill in information for your resource group, your cluster, and any configuration options that the application requires. You can decide to deploy on a new AKS cluster or use an existing cluster.
-- Click *"Create"*.
+ :::image type="content" source="./media/deploy-marketplace/purchase-experience-inline.png" alt-text="Screenshot of the Azure portal wizard for deploying a new offer, with the selector for creating a new cluster or using an existing cluster." lightbox="./media/deploy-marketplace/purchase-experience-full.png":::
-## Deploy a Kubernetes offer
+ When the application is deployed, the portal shows **Your deployment is complete**, along with details of the deployment.
-- Follow the form, filling in information for your resource group, cluster, and any configuration options required by the application. You can decide to deploy on a new AKS cluster or use an existing cluster.
+ :::image type="content" source="./media/deploy-marketplace/deployment-inline.png" alt-text="Screenshot of the Azure portal that shows a successful resource deployment to the cluster." lightbox="./media/deploy-marketplace/deployment-full.png":::
- :::image type="content" source="./media/deploy-marketplace/purchase-experience-inline.png" alt-text="Screenshot of the Azure portal form for deploying a new offer. A selector for creating a new or using an existing cluster is shown." lightbox="./media/deploy-marketplace/purchase-experience-full.png":::
+1. Verify the deployment by using the following command to list the extensions that are running on your cluster:
-- After some time, the application will be deployed, as indicated by the Portal screen.
+ ```azurecli-interactive
+ az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters
+ ```
- :::image type="content" source="./media/deploy-marketplace/deployment-inline.png" alt-text="Screenshot of the Azure portal screen showing a successful resource deployment, indicating the offer has been deployed to the cluster." lightbox="./media/deploy-marketplace/deployment-full.png":::
+## Manage the offer lifecycle
-- You can also verify by listing the extensions running on your cluster:
+For lifecycle management, an Azure Kubernetes offer is represented as a cluster extension for AKS. For more information, see [Cluster extensions for AKS][cluster-extensions].
- ```azurecli-interactive
- az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters
- ```
-
-## Manage offer lifecycle
-
-For lifecycle management, an Azure Kubernetes offer is represented as a cluster extension for Azure Kubernetes service(AKS). For more details, see [cluster extensions for AKS][cluster-extensions].
-
-Purchasing an offer from the Azure Marketplace creates a new instance of the extension on your AKS cluster. The extension instance can be viewed from the cluster using the following command:
+Purchasing an offer from Azure Marketplace creates a new instance of the extension on your AKS cluster. You can view the extension instance from the cluster by using the following command:
```azurecli-interactive
az k8s-extension show --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters
```
-### Removing an offer
+## Monitor billing and usage information
-A purchased Azure Container offer plan can be deleted by deleting the extension instance on the cluster. For example:
+To monitor billing and usage information for the offer that you deployed:
-```azurecli-interactive
-az k8s-extension delete --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters
-```
+1. In the Azure portal, go to the page for your cluster's resource group.
-## Monitor billing and usage information
+1. Select **Cost Management** > **Cost analysis**. Under **Product**, you can see a cost breakdown for the plan that you selected.
+
+ :::image type="content" source="./media/deploy-marketplace/billing-inline.png" alt-text="Screenshot of the Azure portal page for a resource group, with billing information broken down by offer plan." lightbox="./media/deploy-marketplace/billing-full.png":::
+
+## Remove an offer
-To monitor billing and usage information for the offer you've deployed, visit Cost Management > Cost Analysis in your cluster's resource group's page in the Azure portal. You can see a breakdown of cost for the plan you've selected under "Product".
+You can delete a purchased plan for an Azure container offer by deleting the extension instance on the cluster. For example:
+```azurecli-interactive
+az k8s-extension delete --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters
+```
-## Next Steps
+## Next steps
- Learn more about [exploring and analyzing costs][billing]. <!-- LINKS --> [azure-marketplace]: /marketplace/azure-marketplace-overview [cluster-extensions]: ./cluster-extensions.md
-[billing]: ../cost-management-billing/costs/quick-acm-cost-analysis.md
+[billing]: ../cost-management-billing/costs/quick-acm-cost-analysis.md
aks Quick Kubernetes Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep.md
Title: Quickstart - Create an Azure Kubernetes Service (AKS) cluster by using Bi
description: Learn how to quickly create a Kubernetes cluster using a Bicep file and deploy an application in Azure Kubernetes Service (AKS) Previously updated : 08/11/2022 Last updated : 11/01/2022 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
Two [Kubernetes Services][kubernetes-service] are also created:
app: azure-vote-front ```
+ For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).
+ 1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest: ```console
aks Quick Kubernetes Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md
Title: 'Quickstart: Deploy an AKS cluster by using Azure CLI'
description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using the Azure CLI. Previously updated : 06/28/2022 Last updated : 11/01/2022 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run and monitor applications using the managed Kubernetes service in Azure.
Two [Kubernetes Services][kubernetes-service] are also created:
app: azure-vote-front ```
+ For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).
+ 1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest: ```console
aks Quick Kubernetes Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md
description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using the Azure portal. Previously updated : 04/29/2022 Last updated : 11/01/2022 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run and monitor applications using the managed Kubernetes service in Azure.
Two Kubernetes Services are also created:
app: azure-vote-front ```
+ For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).
+ 1. Deploy the application using the `kubectl apply` command and specify the name of your YAML manifest: ```console
aks Quick Kubernetes Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-powershell.md
Title: 'Quickstart: Deploy an AKS cluster by using PowerShell'
description: Learn how to quickly create a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using PowerShell. Previously updated : 04/29/2022 Last updated : 11/01/2022 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
Two [Kubernetes Services][kubernetes-service] are also created:
app: azure-vote-front ```
+ For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).
+ 1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest: ```azurepowershell-interactive
aks Quick Kubernetes Deploy Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-rm-template.md
Title: Quickstart - Create an Azure Kubernetes Service (AKS) cluster
description: Learn how to quickly create a Kubernetes cluster using an Azure Resource Manager template and deploy an application in Azure Kubernetes Service (AKS) Previously updated : 08/17/2022 Last updated : 11/01/2022 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
Two [Kubernetes Services][kubernetes-service] are also created:
app: azure-vote-front ```
+ For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).
+ 1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest: ```console
aks Quick Windows Container Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md
description: Learn how to quickly create a Kubernetes cluster, deploy an applica
Previously updated : 04/29/2022 Last updated : 11/01/2022 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy a Windows Server container so that I can see how to run applications running on a Windows Server container using the managed Kubernetes service in Azure.
spec:
app: sample ```
+For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).
+ Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest: ```console
aks Quick Windows Container Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-powershell.md
Title: Create a Windows Server container on an AKS cluster by using PowerShell
description: Learn how to quickly create a Kubernetes cluster, deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using PowerShell. Previously updated : 04/29/2022 Last updated : 11/01/2022
spec:
app: sample ```
+For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).
+ Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md
As mentioned, virtual network peering is one way to access your private cluster.
2. The private DNS zone is linked only to the VNet that the cluster nodes are attached to (3). This means that the private endpoint can only be resolved by hosts in that linked VNet. In scenarios where no custom DNS is configured on the VNet (default), this works without issue as hosts point at 168.63.129.16 for DNS that can resolve records in the private DNS zone because of the link.
-3. In scenarios where the VNet containing your cluster has custom DNS settings (4), cluster deployment fails unless the private DNS zone is linked to the VNet that contains the custom DNS resolvers (5). This link can be created manually after the private zone is created during cluster provisioning or via automation upon detection of creation of the zone using event-based deployment mechanisms (for example, Azure Event Grid and Azure Functions).
+3. In scenarios where the VNet containing your cluster has custom DNS settings (4), cluster deployment fails unless the private DNS zone is linked to the VNet that contains the custom DNS resolvers (5). This link can be created manually after the private zone is created during cluster provisioning or via automation upon detection of creation of the zone using event-based deployment mechanisms (for example, Azure Event Grid and Azure Functions). To avoid cluster failure during initial deployment, the cluster can be deployed with the private DNS zone resource ID. This only works with resource type Microsoft.ContainerService/managedCluster and API version 2022-07-01. Using an older version with an ARM template or Bicep resource definition is not supported.
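As an illustrative sketch only (the names and resource IDs below are placeholders, and the assigned identity typically needs Private DNS Zone Contributor rights on the zone), the Azure CLI equivalent of supplying the private DNS zone resource ID at deployment time looks like this:

```azurecli-interactive
# Create a private cluster that references an existing private DNS zone at deployment time.
az aks create \
  --resource-group <resource-group> \
  --name <cluster-name> \
  --enable-private-cluster \
  --assign-identity <user-assigned-identity-resource-id> \
  --private-dns-zone <private-dns-zone-resource-id>
```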
> [!NOTE] > Conditional Forwarding doesn't support subdomains.
Once the A record is created, link the private DNS zone to the virtual network t
[container-registry-private-link]: ../container-registry/container-registry-private-link.md [virtual-networks-name-resolution]: ../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server [virtual-networks-168.63.129.16]: ../virtual-network/what-is-ip-address-168-63-129-16.md
-[use-custom-domains]: coredns-custom.md#use-custom-domains
+[use-custom-domains]: coredns-custom.md#use-custom-domains
api-management Api Management Howto Create Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-create-groups.md
In API Management, groups are used to manage the visibility of products to devel
API Management has the following immutable system groups:
-* **Administrators** - Azure subscription administrators are members of this group. Administrators manage API Management service instances, creating the APIs, operations, and products that are used by developers.
+* **Administrators** - Azure subscription administrators are members of this group. Administrators manage API Management service instances, creating the APIs, operations, and products that are used by developers. You can't add users to this group.
+
+ > [!NOTE]
+ > You can change the administrator [email settings](api-management-howto-configure-notifications.md#configure-email-settings) that are used in notifications sent to developers from your API Management instance.
+ * **Developers** - Authenticated developer portal users fall into this group. Developers are the customers that build applications using your APIs. Developers are granted access to the developer portal and build applications that call the operations of an API. * **Guests** - Unauthenticated developer portal users, such as prospective customers visiting the developer portal of an API Management instance fall into this group. They can be granted certain read-only access, such as the ability to view APIs but not call them.
Once the association is added between the developer and the group, you can view
* Once a developer is added to a group, they can view and subscribe to the products associated with that group. For more information, see [How create and publish a product in Azure API Management][How create and publish a product in Azure API Management], * In addition to creating and managing groups in the Azure portal, you can create and manage your groups using the API Management REST API [Group](/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-group-entity) entity.
-* Learn how to manage the administrator [email settings](api-management-howto-configure-notifications.md#configure-email-settings) that asre used in notifications to developers from your API Management instance.
+* Learn how to manage the administrator [email settings](api-management-howto-configure-notifications.md#configure-email-settings) that are used in notifications to developers from your API Management instance.
[Create a group]: #create-group
api-management Api Management Howto Mutual Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates.md
$context = New-AzApiManagementContext -resourcegroup 'ContosoResourceGroup' -ser
New-AzApiManagementBackend -Context $context -Url 'https://contoso.com/myapi' -Protocol http -SkipCertificateChainValidation $true ```
+You can also disable certificate chain validation by using the [Backend](/rest/api/apimanagement/current-ga/backend) REST API.
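As a hedged sketch (the resource names, API version, and `tls` property names below are assumptions based on a recent Backend contract, not taken from this article), the same change can be made by patching the backend with `az rest`:

```azurecli-interactive
# Disable certificate chain and name validation on a backend by patching its tls properties.
az rest --method patch \
  --headers "If-Match=*" \
  --uri "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<service-name>/backends/<backend-id>?api-version=2021-08-01" \
  --body '{"properties": {"tls": {"validateCertificateChain": false, "validateCertificateName": false}}}'
```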
+ ## Delete a client certificate To delete a certificate, select it and then select **Delete** from the context menu (**...**).
api-management Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-custom-domain.md
If you use Azure Key Vault to manage a custom domain TLS certificate, make sure
To fetch a TLS/SSL certificate, API Management must have the list and get secrets permissions on the Azure Key Vault containing the certificate. * When you use the Azure portal to import the certificate, all the necessary configuration steps are completed automatically. * When you use command-line tools or management API, these permissions must be granted manually, in two steps:
- 1. On the **Managed identities** page of your API Management instance, enable a system-assigned or user-assigned [managed identity](api-management-howto-use-managed-service-identity.md). Note the principal Id on that page.
- 1. Give the list and get secrets permissions to this principal Id on the Azure Key Vault containing the certificate.
+ 1. On the **Managed identities** page of your API Management instance, enable a system-assigned or user-assigned [managed identity](api-management-howto-use-managed-service-identity.md). Note the principal ID on that page.
+ 1. Give the list and get secrets permissions to this principal ID on the Azure Key Vault containing the certificate.
If the certificate is set to `autorenew` and your API Management tier has an SLA (that is, in all tiers except the Developer tier), API Management will pick up the latest version automatically, without downtime to the service.
Choose the steps according to the [domain certificate](#domain-certificate-optio
### CNAME record
-Configure a CNAME record that points from your custom domain name (for example, `api.contoso.com`) to your API Management service hostname (for example, `<apim-service-name>.azure-api.net`). A CNAME record is more stable than an A-record in case the IP address changes. For more information, see [IP addresses of Azure API Management](api-management-howto-ip-addresses.md#changes-to-the-ip-addresses) and the [API Management FAQ](./api-management-faq.yml#how-can-i-secure-the-connection-between-the-api-management-gateway-and-my-back-end-services-).
+Configure a CNAME record that points from your custom domain name (for example, `api.contoso.com`) to your API Management service hostname (for example, `<apim-service-name>.azure-api.net`). A CNAME record is more stable than an A-record in case the IP address changes. For more information, see [IP addresses of Azure API Management](api-management-howto-ip-addresses.md#changes-to-the-ip-addresses) and the [API Management FAQ](./api-management-faq.yml#how-can-i-secure-the-connection-between-the-api-management-gateway-and-my-backend-services-).
> [!NOTE] > Some domain registrars only allow you to map subdomains when using a CNAME record, such as `www.contoso.com`, and not root names, such as `contoso.com`. For more information on CNAME records, see the documentation provided by your registrar or [IETF Domain Names - Implementation and Specification](https://tools.ietf.org/html/rfc1035).
api-management Devops Api Development Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/devops-api-development-templates.md
An API developer writes an API definition by providing a specification, settings
There are several tools to assist producing the API definition: * The [Azure API Management DevOps Resource Toolkit][4] includes two tools that provide an Azure Resource Manager (ARM) template. The _extractor_ creates an ARM template by extracting an API definition from an API Management service. The _creator_ produces the ARM template from a YAML specification. The DevOps Resource Toolkit supports SOAP, REST, and GraphQL APIs.
-* The [Azure APIOps Toolkit][5] provides a workflow built on top of a [git][21] source code control system (such as [GitHub][22] or [Azure Repos][23]). It uses an _extractor_ similar to the DevOps Resource Toolkit to produce an API definition that is then applied to a target API Management service. APIOps supports REST only at this time.
+* The [Azure APIOps Toolkit][5] provides a workflow built on top of a [git][21] source code control system (such as [GitHub][22] or [Azure Repos][23]). It uses an _extractor_ similar to the DevOps Resource Toolkit to produce an API definition that is then applied to a target API Management service. APIOps supports REST and GraphQL APIs at this time.
* The [dotnet-apim][6] tool converts a well-formed YAML definition into an ARM template for later deployment. The tool is focused on REST APIs. * [Terraform][7] is an alternative to Azure Resource Manager to configure resources in Azure. You can create a Terraform configuration (together with policies) to implement the API in the same way that an ARM template is created.
Review [Automated API deployments with APIOps][28] in the Azure Architecture Cen
[25]: https://azure.microsoft.com/services/devops/ [26]: https://github.com/microsoft/api-guidelines/blob/vNext/azure/Guidelines.md [27]: https://github.com/Azure/azure-api-style-guide
-[28]: /azure/architecture/example-scenario/devops/automated-api-deployments-apiops
+[28]: /azure/architecture/example-scenario/devops/automated-api-deployments-apiops
app-service Configure Vnet Integration Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-vnet-integration-routing.md
Title: Configure virtual network integration with application routing.
-description: This how-to article walks you through configuring app routing on a regional virtual network integration.
+ Title: Configure virtual network integration with application and configuration routing.
+description: This how-to article walks you through configuring routing on a regional virtual network integration.
Last updated 10/20/2021
# Manage Azure App Service virtual network integration routing
-When you configure application routing, you can either route all traffic or only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) into your Azure virtual network (VNet). This article describes how to configure application routing.
+Through application routing or configuration routing options, you can configure what traffic will be sent through the virtual network integration. See the [overview section](./overview-vnet-integration.md#routes) for more details.
## Prerequisites
-Your app is already integrated using the regional VNet integration feature.
+Your app is already integrated using the regional virtual network integration feature.
-## Configure in the Azure portal
+## Configure application routing
+
+Application routing defines what traffic is routed from your app and into the virtual network. We recommend that you use the **Route All** site setting to enable routing of all traffic. Using the configuration setting allows you to audit the behavior with [a built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F33228571-70a4-4fa1-8ca1-26d0aba8d6ef). The existing `WEBSITE_VNET_ROUTE_ALL` app setting can still be used, and you can enable all traffic routing with either setting.
+
+### Configure in the Azure portal
Follow these steps to disable **Route All** in your app through the portal.
Follow these steps to disable **Route All** in your app through the portal.
1. Select **Yes** to confirm.
-## Configure with the Azure CLI
+### Configure with the Azure CLI
+
+You can also configure **Route All** by using the Azure CLI.
+
+```azurecli-interactive
+az resource update --resource-group <group-name> --name <app-name> --resource-type "Microsoft.Web/sites" --set properties.vnetRouteAllEnabled=[true|false]
+```
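As a hedged example of the legacy approach mentioned above (app and resource group names are placeholders), the same behavior can be enabled with the app setting instead of the site property:

```azurecli-interactive
# Enable all-traffic routing with the legacy app setting instead of the vnetRouteAllEnabled site property.
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITE_VNET_ROUTE_ALL=1
```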
+
+## Configure configuration routing
+
+When you're using virtual network integration, you can configure how parts of the configuration traffic are managed. By default, configuration traffic goes directly over the public route, but for the individual components described below, you can explicitly configure it to be routed through the virtual network integration.
+
+### Container image pull
-You can also configure **Route All** by using the Azure CLI. The minimum az version required is 2.27.0.
+Routing container image pull over virtual network integration can be configured using the Azure CLI.
```azurecli-interactive
-az webapp config set --resource-group <group-name> --name <app-name> --vnet-route-all-enabled [true|false]
+az resource update --resource-group <group-name> --name <app-name> --resource-type "Microsoft.Web/sites" --set properties.vnetImagePullEnabled=[true|false]
```
-## Configure with Azure PowerShell
+We recommend that you use the site property to enable routing of image pull traffic through the virtual network integration. Using the configuration setting allows you to audit the behavior with Azure Policy. The existing `WEBSITE_PULL_IMAGE_OVER_VNET` app setting with the value `true` can still be used, and you can enable routing through the virtual network with either setting.
-```azurepowershell
-# Parameters
-$siteName = '<app-name>'
-$resourceGroupName = '<group-name>'
+### Content storage
-# Configure VNet Integration
-$webApp = Get-AzResource -ResourceType Microsoft.Web/sites -ResourceGroupName $resourceGroupName -ResourceName $siteName
-Set-AzResource -ResourceId ($webApp.Id + "/config/web") -Properties @{ vnetRouteAllEnabled = $true } -Force
+Routing content storage over virtual network integration can be configured using the Azure CLI. In addition to enabling the feature, you must also ensure that any firewall or Network Security Group configured on traffic from the subnet allows traffic to ports 443 and 445.
+
+```azurecli-interactive
+az resource update --resource-group <group-name> --name <app-name> --resource-type "Microsoft.Web/sites" --set properties.vnetContentStorageEnabled=[true|false]
```
+We recommend that you use the site property to enable routing of content storage traffic through the virtual network integration. Using the configuration setting allows you to audit the behavior with Azure Policy. The existing `WEBSITE_CONTENTOVERVNET` app setting with the value `1` can still be used, and you can enable routing through the virtual network with either setting.
+ ## Next steps - [Enable virtual network integration](./configure-vnet-integration-enable.md)
app-service Configure Network Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/configure-network-settings.md
The setting is also available for configuration through Azure portal at the App
:::image type="content" source="./media/configure-network-settings/configure-allow-incoming-ftp-connections.png" alt-text="Screenshot from Azure portal of how to configure your App Service Environment to allow incoming ftp connections.":::
-In addition to enabling access, you need to ensure that you have [configured DNS if you are using ILB App Service Environment](./networking.md#dns-configuration-for-ftp-access).
+In addition to enabling access, you need to ensure that you have [configured DNS if you are using ILB App Service Environment](./networking.md#dns-configuration-for-ftp-access) and that the [necessary ports](./networking.md#ports-and-network-restrictions) are unblocked.
## Remote debugging access
app-service How To Custom Domain Suffix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-custom-domain-suffix.md
If you don't have an App Service Environment, see [How to Create an App Service
The custom domain suffix defines a root domain that can be used by the App Service Environment. In the public variation of Azure App Service, the default root domain for all web apps is *azurewebsites.net*. For ILB App Service Environments, the default root domain is *appserviceenvironment.net*. However, since an ILB App Service Environment is internal to a customer's virtual network, customers can use a root domain in addition to the default one that makes sense for use within a company's internal virtual network. For example, a hypothetical Contoso Corporation might use a default root domain of *internal-contoso.com* for apps that are intended to only be resolvable and accessible within Contoso's virtual network. An app in this virtual network could be reached by accessing *APP-NAME.internal-contoso.com*.
-The custom domain name works for app requests but doesn't for the scm site. The scm site is only available at *APP-NAME.scm.ASE-NAME.appserviceenvironment.net*.
- The custom domain suffix is for the App Service Environment. This feature is different from a custom domain binding on an App Service. For more information on custom domain bindings, see [Map an existing custom DNS name to Azure App Service](../app-service-web-tutorial-custom-domain.md).
+If the certificate used for the custom domain suffix contains a Subject Alternate Name (SAN) entry for **.scm.CUSTOM-DOMAIN*, the scm site will then also be reachable from *APP-NAME.scm.CUSTOM-DOMAIN*. You can only access the scm site over the custom domain by using basic authentication. Single sign-on is only possible with the default root domain.
+ ## Prerequisites - ILB variation of App Service Environment v3.
The certificate for custom domain suffix must be stored in an Azure Key Vault. A
:::image type="content" source="./media/custom-domain-suffix/key-vault-networking.png" alt-text="Screenshot of a sample networking page for key vault to allow custom domain suffix feature.":::
-Your certificate must be a wildcard certificate for the selected custom domain name. For example, *contoso.com* would need a certificate covering **.contoso.com*.
+Your certificate must be a wildcard certificate for the selected custom domain name. For example, *internal-contoso.com* would need a certificate covering **.internal-contoso.com*. If the certificate used for the custom domain suffix contains a Subject Alternate Name (SAN) entry for scm, for example **.scm.internal-contoso.com*, the scm site will also be available using the custom domain suffix.
::: zone pivot="experience-azp"
If you want to use your own DNS server, add the following records:
1. Create a zone for your custom domain. 1. Create an A record in that zone that points * to the inbound IP address used by your App Service Environment. 1. Create an A record in that zone that points @ to the inbound IP address used by your App Service Environment.
+1. Optionally, create a zone for the scm subdomain with a wildcard (*) A record that points to the inbound IP address used by your App Service Environment.
To configure DNS in Azure DNS private zones:
To configure DNS in Azure DNS private zones:
:::image type="content" source="./media/custom-domain-suffix/custom-domain-suffix-dns-configuration.png" alt-text="Screenshot of a sample DNS configuration for your custom domain suffix."::: 1. Link your Azure DNS private zone to your App Service Environment's virtual network. :::image type="content" source="./media/custom-domain-suffix/private-dns-zone-vnet-link.png" alt-text="Screenshot of a sample virtual network link for private DNS zone.":::
+1. Optionally create an A record in that zone that points *.scm to the inbound IP address used by your App Service Environment.
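A hedged sketch of the same records created with the Azure CLI (the zone name, IP address, and resource names are placeholders):

```azurecli-interactive
# Create the private DNS zone for the custom domain suffix, add the wildcard, root, and optional scm records,
# and link the zone to the App Service Environment's virtual network.
az network private-dns zone create --resource-group <resource-group> --name internal-contoso.com

az network private-dns record-set a add-record --resource-group <resource-group> --zone-name internal-contoso.com --record-set-name "*" --ipv4-address <ase-inbound-ip>
az network private-dns record-set a add-record --resource-group <resource-group> --zone-name internal-contoso.com --record-set-name "@" --ipv4-address <ase-inbound-ip>
az network private-dns record-set a add-record --resource-group <resource-group> --zone-name internal-contoso.com --record-set-name "*.scm" --ipv4-address <ase-inbound-ip>

az network private-dns link vnet create --resource-group <resource-group> --zone-name internal-contoso.com --name ase-dns-link --virtual-network <vnet-name-or-id> --registration-enabled false
```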
For more information on configuring DNS for your domain, see [Use an App Service Environment](./using.md#dns-configuration).
app-service Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/networking.md
The normal app access ports inbound are as follows:
|Visual Studio remote debugging|4022, 4024, 4026| |Web Deploy service|8172|
+> [!NOTE]
+> For FTP access, even if you want to disallow standard FTP on port 21, you still need to allow traffic from the LoadBalancer to the App Service Environment subnet range, as this is used for internal health ping traffic for the ftp service specifically.
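For example, a hedged sketch of an NSG rule (the rule name, priority, and subnet range are placeholders) that keeps load balancer traffic to the FTP port allowed even when other FTP access is restricted:

```azurecli-interactive
# Allow the Azure load balancer to reach the App Service Environment subnet on port 21 for FTP health pings.
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name <ase-nsg-name> \
  --name Allow-LoadBalancer-Ftp-HealthPing \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes AzureLoadBalancer \
  --destination-address-prefixes <ase-subnet-range> \
  --destination-port-ranges 21
```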
+ ## Network routing You can set route tables without restriction. You can tunnel all of the outbound application traffic from your App Service Environment to an egress firewall device, such as Azure Firewall. In this scenario, the only thing you have to worry about is your application dependencies.
app-service Overview Access Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-access-restrictions.md
For any rule, regardless of type, you can add http header filtering. Http header
* **X-Forwarded-For**. [Standard header](https://developer.mozilla.org/docs/Web/HTTP/Headers/X-Forwarded-For) for identifying the originating IP address of a client connecting through a proxy server. Accepts valid CIDR values. * **X-Forwarded-Host**. [Standard header](https://developer.mozilla.org/docs/Web/HTTP/Headers/X-Forwarded-Host) for identifying the original host requested by the client. Accepts any string up to 64 characters in length.
-* **X-Azure-FDID**. [Custom header](../frontdoor/front-door-http-headers-protocol.md#front-door-to-backend) for identifying the reverse proxy instance. Azure Front Door will send a guid identifying the instance, but it can also be used by third party proxies to identify the specific instance. Accepts any string up to 64 characters in length.
-* **X-FD-HealthProbe**. [Custom header](../frontdoor/front-door-http-headers-protocol.md#front-door-to-backend) for identifying the health probe of the reverse proxy. Azure Front Door will send "1" to uniquely identify a health probe request. The header can also be used by third party proxies to identify health probes. Accepts any string up to 64 characters in length.
+* **X-Azure-FDID**. [Custom header](../frontdoor/front-door-http-headers-protocol.md#from-the-front-door-to-the-backend) for identifying the reverse proxy instance. Azure Front Door will send a guid identifying the instance, but it can also be used by third party proxies to identify the specific instance. Accepts any string up to 64 characters in length.
+* **X-FD-HealthProbe**. [Custom header](../frontdoor/front-door-http-headers-protocol.md#from-the-front-door-to-the-backend) for identifying the health probe of the reverse proxy. Azure Front Door will send "1" to uniquely identify a health probe request. The header can also be used by third party proxies to identify health probes. Accepts any string up to 64 characters in length.
Some use cases for http header filtering are: * Restrict access to traffic from proxy servers forwarding the host name
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
When regional virtual network integration is enabled, your app makes outbound ca
When all traffic routing is enabled, all outbound traffic is sent into your virtual network. If all traffic routing isn't enabled, only private traffic (RFC1918) and service endpoints configured on the integration subnet will be sent into the virtual network, and outbound traffic to the internet will go through the same channels as normal.
-The feature supports only one virtual interface per worker. One virtual interface per worker means one regional virtual network integration per App Service plan. All the apps in the same App Service plan can only use the same virtual network integration to a specific subnet. If you need an app to connect to another virtual network or another subnet in the same virtual network, you need to create another App Service plan. The virtual interface used isn't a resource that customers have direct access to.
+The feature supports two virtual interfaces per worker. Two virtual interfaces per worker means two regional virtual network integrations per App Service plan. The apps in the same App Service plan can only use one of the virtual network integrations to a specific subnet. If you need an app to connect to additional virtual networks or additional subnets in the same virtual network, you need to create another App Service plan. The virtual interfaces used aren't resources that customers have direct access to.
Because of the nature of how this technology operates, the traffic that's used with virtual network integration doesn't show up in Azure Network Watcher or NSG flow logs.
Application routing applies to traffic that is sent from your app after it has b
* Only traffic configured in application or configuration routing is subject to the NSGs and UDRs that are applied to your integration subnet. * When **Route All** is enabled, the source address for your outbound public traffic from your app is still one of the IP addresses that are listed in your app properties. If you route your traffic through a firewall or a NAT gateway, the source IP address will then originate from this service.
-Learn [how to configure application routing](./configure-vnet-integration-routing.md).
-
-We recommend that you use the **Route All** configuration setting to enable routing of all traffic. Using the configuration setting allows you to audit the behavior with [a built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F33228571-70a4-4fa1-8ca1-26d0aba8d6ef). The existing `WEBSITE_VNET_ROUTE_ALL` app setting can still be used, and you can enable all traffic routing with either setting.
+Learn [how to configure application routing](./configure-vnet-integration-routing.md#configure-application-routing).
> [!NOTE] > Outbound SMTP connectivity (port 25) is supported for App Service when the SMTP traffic is routed through the virtual network integration. The supportability is determined by a setting on the subscription where the virtual network is deployed. For virtual networks/subnets created before 1. August 2022 you need to initiate a temporary configuration change to the virtual network/subnet for the setting to be synchronized from the subscription. An example could be to add a temporary subnet, associate/dissociate an NSG temporarily or configure a service endpoint temporarily. For more information and troubleshooting see [Troubleshoot outbound SMTP connectivity problems in Azure](../virtual-network/troubleshoot-outbound-smtp-connectivity.md).
When you're using virtual network integration, you can configure how parts of th
Bringing your own storage for content is often used in Functions where [content storage](./../azure-functions/configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network) is configured as part of the Functions app.
-To route content storage traffic through the virtual network integration, you need to add an app setting named `WEBSITE_CONTENTOVERVNET` with the value `1`. In addition to adding the app setting, you must also ensure that any firewall or Network Security Group configured on traffic from the subnet allow traffic to port 443 and 445.
+To route content storage traffic through the virtual network integration, you must ensure that the routing setting is configured. Learn [how to configure content storage routing](./configure-vnet-integration-routing.md#content-storage).
+
+In addition to configuring the routing, you must also ensure that any firewall or Network Security Group configured on traffic from the subnet allows traffic to ports 443 and 445.
##### Container image pull
-When using custom containers for Linux, you can pull the container over the virtual network integration. To route the container pull traffic through the virtual network integration, you must add an app setting named `WEBSITE_PULL_IMAGE_OVER_VNET` with the value `true`.
+When using custom containers, you can pull the container over the virtual network integration. To route the container pull traffic through the virtual network integration, you must ensure that the routing setting is configured. Learn [how to configure image pull routing](./configure-vnet-integration-routing.md#container-image-pull).
##### App settings using Key Vault references App settings using Key Vault references will attempt to get secrets over the public route. If the Key Vault is blocking public traffic and the app is using virtual network integration, an attempt will then be made to get the secrets through the virtual network integration. > [!NOTE]
-> * Windows containers don't support pulling custom container images over virtual network integration.
> * Backup/restore to private storage accounts is currently not supported. > * Configure SSL/TLS certificates from private Key Vaults is currently not supported. > * App Service Logs to private storage accounts is currently not supported. We recommend using Diagnostics Logging and allowing Trusted Services for the storage account.
There are some limitations with using regional virtual network integration:
* The integration subnet can't have [service endpoint policies](../virtual-network/virtual-network-service-endpoint-policies-overview.md) enabled. * The integration subnet can be used by only one App Service plan. * You can't delete a virtual network with an integrated app. Remove the integration before you delete the virtual network.
-* You can have only one regional virtual network integration per App Service plan. Multiple apps in the same App Service plan can use the same virtual network.
+* You can have two regional virtual network integrations per App Service plan. Multiple apps in the same App Service plan can use the same virtual network integration.
* You can't change the subscription of an app or a plan while there's an app that's using regional virtual network integration. ## Gateway-required virtual network integration
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
Title: Environment variables and app settings reference description: Describes the commonly used environment variables, and which ones can be modified with app settings. Previously updated : 02/15/2022 Last updated : 11/01/2022 # Environment variables and app settings in Azure App Service
For more information on deployment slots, see [Set up staging environments in Az
| Setting name| Description | Example | |-|-|-|
-|`WEBSITE_SLOT_NAME`| Read-only. Name of the current deployment slot. The name of the production slot is `Production`. ||
|`WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS`| By default, the versions for site extensions are specific to each slot. This prevents unanticipated application behavior due to changing extension versions after a swap. If you want the extension versions to swap as well, set to `0` on *all slots*. || |`WEBSITE_OVERRIDE_PRESERVE_DEFAULT_STICKY_SLOT_SETTINGS`| Designates certain settings as [sticky or not swappable by default](deploy-staging-slots.md#which-settings-are-swapped). Default is `true`. Set this setting to `false` or `0` for *all deployment slots* to make them swappable instead. There's no fine-grain control for specific setting types. || |`WEBSITE_SWAP_WARMUP_PING_PATH`| Path to ping to warm up the target slot in a swap, beginning with a slash. The default is `/`, which pings the root path over HTTP. | `/statuscheck` |
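As a hedged illustration only (resource and slot names are placeholders), slot-related settings like these are ordinary app settings and can be applied per slot with the Azure CLI; repeat for each slot, omitting `--slot` for the production slot:

```azurecli-interactive
# Make settings swappable by disabling the default sticky behavior on a deployment slot.
az webapp config appsettings set --resource-group <group-name> --name <app-name> --slot <slot-name> --settings WEBSITE_OVERRIDE_PRESERVE_DEFAULT_STICKY_SLOT_SETTINGS=false
```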
app-service Reference Dangling Subdomain Prevention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-dangling-subdomain-prevention.md
Learn more about Subdomain Takeover at [Dangling DNS and subdomain takeover](/az
Azure App Service provides [Name Reservation Service](#how-app-service-prevents-subdomain-takeovers) and [domain verification tokens](#how-you-can-prevent-subdomain-takeovers) to prevent subdomain takeovers. ## How App Service prevents subdomain takeovers
-Upon deletion of an App Service app, the corresponding DNS is reserved. During the reservation period, reuse of the DNS is forbidden except for subscriptions belonging to tenant of the subscription originally owning the DNS.
+Upon deletion of an App Service app or App Service Environment (ASE), the corresponding DNS is reserved. During the reservation period, reuse of the DNS is forbidden except for subscriptions belonging to the tenant of the subscription that originally owned the DNS.
-After the reservation expires, the DNS is free to be claimed by any subscription. By Name Reservation Service, the customer is afforded some time to either clean-up any associations/pointers to said DNS or reclaim the DNS in Azure. The DNS name being reserved can be derived by appending 'azurewebsites.net'. Name Reservation Service is enabled by default on Azure App Service and doesn't require more configuration.
+After the reservation expires, the DNS is free to be claimed by any subscription. Through the Name Reservation Service, the customer is afforded some time to either clean up any associations or pointers to the DNS name or reclaim it in Azure. The reserved DNS name for web apps can be derived by appending 'azurewebsites.net', and for an ASE by appending 'appserviceenvironment.net'. Name Reservation Service is enabled by default on Azure App Service and doesn't require more configuration.
#### Example scenario
attestation Azure Tpm Vbs Attestation Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/azure-tpm-vbs-attestation-usage.md
+
+ Title: Azure TPM VBS attestation usage
+description: Learn about how to apply TPM and VBS attestation
++++ Last updated : 09/05/2022++++
+# Using TPM/VBS attestation
+
+Attestation can be integrated into various applications and services, catering to different use cases. Azure Attestation, which acts as the remote attestation service, can be adapted to a desired purpose by updating the attestation policy. The policy engine works as a processor: it takes the incoming payload as evidence and performs the validations authored in the policy. This architecture simplifies the workflow and enables the service owner to purpose-build solutions for varied platforms and use cases. The workflow remains the same as described in [Azure attestation workflow](workflow.md). The attestation policy needs to be crafted according to the validations required.
+
+Attesting a platform is challenging because of the varied components involved in boot and setup. You need to rely on a hardware root-of-trust anchor that can be used to verify the first steps of the boot and extend that trust upward into every layer of your system. A hardware TPM provides such an anchor for a remote attestation solution. Azure Attestation provides a highly scalable measured boot and runtime integrity measurement attestation solution with a revocation framework to give you full control over platform attestation.
+
+## Attestation steps
+
+Attestation setup has two parts: one pertaining to the service endpoint and one pertaining to the client.
++
+Detailed information about the workflow is described in [Azure attestation workflow](workflow.md).
+
+### Service endpoint setup:
+Setting up an endpoint is the first step for any attestation to be performed. The endpoint can be set up either through code or by using the Azure portal.
+
+Here's how you can set up an attestation endpoint by using the Azure portal:
+
+1. Prerequisite: Access to the Microsoft Azure Active Directory (Azure AD) tenant and subscription under which you want to create the attestation endpoint.
+Learn more about setting up an [Azure AD tenant](../active-directory/develop/quickstart-create-new-tenant.md).
+
+2. Create an endpoint under the desired resource group, with the desired name.
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5azcU]
+
+3. Add the Attestation Contributor role to the identity that will be responsible for updating the attestation policy.
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5aoRj]
+
+4. Configure the endpoint with the required policy.
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5aoRk]
+
+Sample policies can be found in the [policy section](tpm-attestation-sample-policies.md).
+
+> [!NOTE]
+> TPM endpoints are designed to be provisioned without a default attestation policy.
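The endpoint setup can also be scripted. The following is a minimal sketch with the Azure CLI, assuming the `attestation` CLI extension is available; the provider name, resource group, and identity are placeholders:

```azurecli-interactive
# Install the attestation extension (assumption: extension name "attestation").
az extension add --name attestation

# Create an attestation provider (the endpoint) in an existing resource group.
az attestation create --name <provider-name> --resource-group <resource-group> --location <location>

# Grant the policy author the Attestation Contributor role on the new provider.
az role assignment create \
  --role "Attestation Contributor" \
  --assignee <policy-author-object-id-or-upn> \
  --scope $(az attestation show --name <provider-name> --resource-group <resource-group> --query id --output tsv)
```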
++
+### Client setup:
+A client that communicates with the attestation service endpoint needs to follow the protocol described in the [protocol documentation](virtualization-based-security-protocol.md). Use the [Attestation Client NuGet](https://www.nuget.org/packages/Microsoft.Attestation.Client) package to ease the integration.
+
+1. Prerequisite: An Azure AD identity is needed to access the TPM endpoint.
+Learn more about [Azure AD identity tokens](../active-directory/develop/v2-overview.md).
+
+2. Add the Attestation Reader role to the identity that will be needed for authentication against the endpoint.
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5aoRi]
++
+## Execute the attestation workflow:
+Use the [client](https://github.com/microsoft/Attestation-Client-Samples) to trigger an attestation flow. A successful attestation results in an attestation report (an encoded JWT token). By parsing the JWT token, the contents of the report can be easily validated against the expected outcome.
+
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5azcT]
++
+Here's a sample of the contents of the attestation report.
+
+The OpenID [metadata endpoint](/rest/api/attestation/metadata-configuration/get?tabs=HTTP) contains properties that describe the attestation service. The signing keys describe the keys that will be used to sign tokens generated by the attestation service. All tokens emitted by the attestation service will be signed by one of the certificates listed in the attestation signing keys.
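As a hedged sketch (the provider URI is a placeholder, and the well-known path assumes the standard OpenID configuration route), the metadata can be fetched anonymously:

```azurecli-interactive
# Fetch the OpenID configuration metadata for an attestation provider; no token is required.
az rest --method get \
  --uri "<attestation-provider-uri>/.well-known/openid-configuration" \
  --skip-authorization-header
```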
+
+## Next steps
+- [Set up Azure Attestation using PowerShell](quickstart-powershell.md)
+- [Attest an SGX enclave using code samples](/samples/browse/?expanded=azure&terms=attestation)
+- [Learn more about policy](policy-reference.md)
attestation Policy Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-examples.md
Title: Examples of an Azure Attestation policy
+ Title: Examples of an Azure SGX Attestation policy
description: Examples of Azure Attestation policy.
eyJhbGciOiJub25lIn0.eyJBdHRlc3RhdGlvblBvbGljeSI6ICJkbVZ5YzJsdmJqMGdNUzR3TzJGMWRH
eyJhbGciOiJSU0EyNTYiLCJ4NWMiOlsiTUlJQzFqQ0NBYjZnQXdJQkFnSUlTUUdEOUVGakJcdTAwMkJZd0RRWUpLb1pJaHZjTkFRRUxCUUF3SWpFZ01CNEdBMVVFQXhNWFFYUjBaWE4wWVhScGIyNURaWEowYVdacFkyRjBaVEF3SGhjTk1qQXhNVEl6TVRneU1EVXpXaGNOTWpFeE1USXpNVGd5TURVeldqQWlNU0F3SGdZRFZRUURFeGRCZEhSbGMzUmhkR2x2YmtObGNuUnBabWxqWVhSbE1EQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUpyRWVNSlo3UE01VUJFbThoaUNLRGh6YVA2Y2xYdkhmd0RIUXJ5L3V0L3lHMUFuMGJ3MVU2blNvUEVtY2FyMEc1WmYxTUR4alZOdEF5QjZJWThKLzhaQUd4eFFnVVZsd1dHVmtFelpGWEJVQTdpN1B0NURWQTRWNlx1MDAyQkJnanhTZTBCWVpGYmhOcU5zdHhraUNybjYwVTYwQUU1WFx1MDAyQkE1M1JvZjFUUkNyTXNLbDRQVDRQeXAzUUtNVVlDaW9GU3d6TkFQaU8vTy9cdTAwMkJIcWJIMXprU0taUXh6bm5WUGVyYUFyMXNNWkptRHlyUU8vUFlMTHByMXFxSUY2SmJsbjZEenIzcG5uMXk0Wi9OTzJpdFBxMk5Nalx1MDAyQnE2N1FDblNXOC9xYlpuV3ZTNXh2S1F6QVR5VXFaOG1PSnNtSThUU05rLzBMMlBpeS9NQnlpeDdmMTYxQ2tjRm1LU3kwQ0F3RUFBYU1RTUE0d0RBWURWUjBUQkFVd0F3RUIvekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBZ1ZKVWRCaXRud3ZNdDdvci9UMlo4dEtCUUZsejFVcVVSRlRUTTBBcjY2YWx2Y2l4VWJZR3gxVHlTSk5pbm9XSUJROU9QamdMa1dQMkVRRUtvUnhxN1NidGxqNWE1RUQ2VjRyOHRsejRISjY0N3MyM2V0blJFa2o5dE9Gb3ZNZjhOdFNVeDNGTnBhRUdabDJMUlZHd3dcdTAwMkJsVThQd0gzL2IzUmVCZHRhQTdrZmFWNVx1MDAyQml4ZWRjZFN5S1F1VkFUbXZNSTcxM1A4VlBsNk1XbXNNSnRrVjNYVi9ZTUVzUVx1MDAyQkdZcU1yN2tLWGwxM3lldUVmVTJWVkVRc1ovMXRnb29iZVZLaVFcdTAwMkJUcWIwdTJOZHNcdTAwMkJLamRIdmFNYngyUjh6TDNZdTdpR0pRZnd1aU1tdUxSQlJwSUFxTWxRRktLNmRYOXF6Nk9iT01zUjlpczZ6UDZDdmxGcEV6bzVGUT09Il19.eyJBdHRlc3RhdGlvblBvbGljeSI6ImRtVnljMmx2YmoweExqQTdZWFYwYUc5eWFYcGhkR2x2Ym5KMWJHVnpJSHRqT2x0MGVYQmxQVDBpSkdsekxXUmxZblZuWjJGaWJHVWlYU0FtSmlCYmRtRnNkV1U5UFhSeWRXVmRJRDAtSUdSbGJua29LVHM5UGlCd1pYSnRhWFFvS1R0OU8ybHpjM1ZoYm1ObGNuVnNaWE1nZXlBZ0lDQmpPbHQwZVhCbFBUMGlKR2x6TFdSbFluVm5aMkZpYkdVaVhTQTlQaUJwYzNOMVpTaDBlWEJsUFNKT2IzUkVaV0oxWjJkaFlteGxJaXdnZG1Gc2RXVTlZeTUyWVd4MVpTazdJQ0FnSUdNNlczUjVjR1U5UFNJa2FYTXRaR1ZpZFdkbllXSnNaU0pkSUQwLUlHbHpjM1ZsS0hSNWNHVTlJbWx6TFdSbFluVm5aMkZpYkdVaUxDQjJZV3gxWlQxakxuWmhiSFZsS1RzZ0lDQWdZenBiZEhsd1pUMDlJaVJ6WjNndGJYSnphV2R1WlhJaVhTQTlQaUJwYzNOMVpTaDBlWEJsUFNKelozZ3RiWEp6YVdkdVpYSWlMQ0IyWVd4MVpUMWpMblpoYkhWbEtUc2dJQ0FnWXpwYmRIbHdaVDA5SWlSelozZ3RiWEpsYm1Oc1lYWmxJbDBnUFQ0Z2FYTnpkV1VvZEhsd1pUMGljMmQ0TFcxeVpXNWpiR0YyWlNJc0lIWmhiSFZsUFdNdWRtRnNkV1VwT3lBZ0lDQmpPbHQwZVhCbFBUMGlKSEJ5YjJSMVkzUXRhV1FpWFNBOVBpQnBjM04xWlNoMGVYQmxQU0p3Y205a2RXTjBMV2xrSWl3Z2RtRnNkV1U5WXk1MllXeDFaU2s3SUNBZ0lHTTZXM1I1Y0dVOVBTSWtjM1p1SWwwZ1BUNGdhWE56ZFdVb2RIbHdaVDBpYzNadUlpd2dkbUZzZFdVOVl5NTJZV3gxWlNrN0lDQWdJR002VzNSNWNHVTlQU0lrZEdWbElsMGdQVDRnYVhOemRXVW9kSGx3WlQwaWRHVmxJaXdnZG1Gc2RXVTlZeTUyWVd4MVpTazdmVHMifQ.c0l-xqGDFQ8_kCiQ0_vvmDQYG_u544CYmoiucPNxd9MU8ZXT69UD59UgSuya2yl241NoVXA_0LaMEB2re0JnTbPD_dliJn96HnIOqnxXxRh7rKbu65ECUOMWPXbyKQMZ0I3Wjhgt_XyyhfEiQGfJfGzA95-wm6yWqrmW7dMI7JkczG9ideztnr0bsw5NRsIWBXOjVy7Bg66qooTnODS_OqeQ4iaNsN-xjMElHABUxXhpBt2htbhemDU1X41o8clQgG84aEHCgkE07pR-7IL_Fn2gWuPVC66yxAp00W1ib2L-96q78D9J52HPdeDCSFio2RL7r5lOtz8YkQnjacb6xA ```
-## Sample policy for TPM using Policy version 1.0
-
-```
-version=1.0;
-
-authorizationrules {
- => permit();
-};
-
-issuancerules
-{
-[type=="aikValidated", value==true]&&
-[type=="secureBootEnabled", value==true] &&
-[type=="bootDebuggingDisabled", value==true] &&
-[type=="vbsEnabled", value==true] &&
-[type=="notWinPE", value==true] &&
-[type=="notSafeMode", value==true] => issue(type="PlatformAttested", value=true);
-};
-```
-
-A simple TPM attestation policy that can be used to verify minimal aspects of the boot.
-
-## Sample policy for TPM using Policy version 1.2
-
-```
-version=1.2;
-
-configurationrules{
- => issueproperty(type="required_pcr_mask", value=131070);
- => issueproperty(type="require_valid_aik_cert", value=false);
-};
-
-authorizationrules {
-c:[type == "tpmVersion", issuer=="AttestationService", value==2] => permit();
-};
-
-issuancerules{
-
-c:[type == "aikValidated", issuer=="AttestationService"] =>issue(type="aikValidated", value=c.value);
-
-// SecureBoot enabled
-c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']"));
-c:[type == "efiConfigVariables", issuer=="AttestationPolicy"]=> issue(type = "SecureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'")));
-![type=="SecureBootEnabled", issuer=="AttestationPolicy"] => issue(type="SecureBootEnabled", value=false);
-
-// Retrieve bool properties Code integrity
-c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` || PcrIndex == `13` || PcrIndex == `19` || PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));
-c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="codeIntegrityEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_CODEINTEGRITY")));
-c:[type=="codeIntegrityEnabledSet", issuer=="AttestationPolicy"] => issue(type="CodeIntegrityEnabled", value=ContainsOnlyValue(c.value, true));
-![type=="CodeIntegrityEnabled", issuer=="AttestationPolicy"] => issue(type="CodeIntegrityEnabled", value=false);
-
-// Bitlocker Boot Status, The first non zero measurement or zero.
-c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` || PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));
-c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="BitlockerStatus", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BITLOCKER_UNLOCK | @[? Value != `0`].Value | @[0]")));
-[type=="BitlockerStatus", issuer=="AttestationPolicy"] => issue(type="BitlockerStatus", value=true);
-![type=="BitlockerStatus", issuer=="AttestationPolicy"] => issue(type="BitlockerStatus", value=false);
-
-// Elam Driver (windows defender) Loaded
-c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="elamDriverLoaded", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_LOADEDMODULE_AGGREGATION[] | [? EVENT_IMAGEVALIDATED == `true` && (equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wdboot.sys') || equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wd\\wdboot.sys'))] | @ != `null`")));
-[type=="elamDriverLoaded", issuer=="AttestationPolicy"] => issue(type="ELAMDriverLoaded", value=true);
-![type=="elamDriverLoaded", issuer=="AttestationPolicy"] => issue(type="ELAMDriverLoaded", value=false);
-
-};
-
-```
-
-The policy uses the TPM version to restrict attestation calls. The issuancerules looks at various properties measured during boot.
- ## Next steps - [How to author and sign an attestation policy](author-sign-policy.md)
attestation Policy Version 1 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-version-1-2.md
Some of the key rules you can use to generate claims are listed here.
|Feature |Description |Policy rule | |--|-|--|
-| Secure boot |Device boots using only software that's trusted by the OEM, which is Microsoft. | `c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']")); => issue(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] \| length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'"))); \![type=="secureBootEnabled", issuer=="AttestationPolicy"] => issue(type="secureBootEnabled", value=false);` |
-| Code integrity |Code integrity is a feature that validates the integrity of a driver or system file each time it's loaded into memory.| `// Retrieve bool propertiesc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="codeIntegrityEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_CODEINTEGRITY")));c:[type=="codeIntegrityEnabledSet", issuer=="AttestationPolicy"] => issue(type="codeIntegrityEnabled", value=ContainsOnlyValue(c.value, true));\![type=="codeIntegrityEnabled", issuer=="AttestationPolicy"] => issue(type="codeIntegrityEnabled", value=false);` |
-|BitLocker [Boot state] |Used for encryption of device drives.| `// Bitlocker Boot Status, The first non zero measurement or zero.c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => issue(type="bitlockerStatus", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BITLOCKER_UNLOCK \| @[? Value != `0`].Value \| @[0]")));\![type=="bitlockerStatus"] => issue(type="bitlockerStatus", value=0);Nonzero means enabled.` |
-| Early Launch Antimalware (ELAM) | ELAM protects against loading unsigned or malicious drivers during boot. | `// Elam Driver (windows defender) Loaded.c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => issue(type="elamDriverLoaded", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_LOADEDMODULE_AGGREGATION[] \| [? EVENT_IMAGEVALIDATED == `true` && (equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wdboot.sys') \|\| equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wd\\wdboot.sys'))] \| @ != `null`")));![type=="elamDriverLoaded", issuer=="AttestationPolicy"] => issue(type="elamDriverLoaded", value=false);` |
-| Boot debugging |Allows the user to connect to a boot debugger. Can be used to bypass secure boot and other boot protections. | `// Boot debuggingc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="bootDebuggingEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BOOTDEBUGGING")));c:[type=="bootDebuggingEnabledSet", issuer=="AttestationPolicy"] => issue(type="bootDebuggingDisabled", value=ContainsOnlyValue(c.value, false));\![type=="bootDebuggingDisabled", issuer=="AttestationPolicy"] => issue(type="bootDebuggingDisabled", value=false);` |
-| Kernel debugging | Allows the user to connect a kernel debugger. Grants access to all system resources (less virtualization-based security [VBS] protected resources). | `// Kernel Debuggingc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="osKernelDebuggingEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_OSKERNELDEBUG")));c:[type=="osKernelDebuggingEnabledSet", issuer=="AttestationPolicy"] => issue(type="osKernelDebuggingDisabled", value=ContainsOnlyValue(c.value, false));\![type=="osKernelDebuggingDisabled", issuer=="AttestationPolicy"] => issue(type="osKernelDebuggingDisabled", value=false);` |
-|Data Execution Prevention (DEP) policy | DEP policy is a set of hardware and software technologies that perform extra checks on memory to help prevent malicious code from running on a system. | `// DEP Policyc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => issue(type="depPolicy", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_DATAEXECUTIONPREVENTION.Value \| @[-1]")));\![type=="depPolicy"] => issue(type="depPolicy", value=0);` |
-| Test and flight signing | Enables the user to run test-signed code. | `// Test Signing< c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY")); c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="testSigningEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_TESTSIGNING"))); c:[type=="testSigningEnabledSet", issuer=="AttestationPolicy"] => issue(type="testSigningDisabled", value=ContainsOnlyValue(c.value, false)); ![type=="testSigningDisabled", issuer=="AttestationPolicy"] => issue(type="testSigningDisabled", value=false);//Flight Signingc:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="flightSigningEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_FLIGHTSIGNING")));c:[type=="flightSigningEnabledSet", issuer=="AttestationPolicy"] => issue(type="flightSigningNotEnabled", value=ContainsOnlyValue(c.value, false));![type=="flightSigningNotEnabled", issuer=="AttestationPolicy"] => issue(type="flightSigningNotEnabled", value=false);` |
-| Virtual Secure Mode/VBS | VBS uses the Windows hypervisor to create this virtual secure mode that's used to protect vital system and operating system resources and credentials. | `// VSM enabled c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="vsmEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_VBS_VSM_REQUIRED")));c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="vsmEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_MANDATORY_ENFORCEMENT")));c:[type=="vsmEnabledSet", issuer=="AttestationPolicy"] => issue(type="vsmEnabled", value=ContainsOnlyValue(c.value, true));![type=="vsmEnabled", issuer=="AttestationPolicy"] => issue(type="vsmEnabled", value=false);c:[type=="vsmEnabled", issuer=="AttestationPolicy"] => issue(type="vbsEnabled", value=c.value);` |
-| Hypervisor-protected code integrity (HVCI) | HVCI is a feature that validates the integrity of a system file each time it's loaded into memory.| `// HVCIc:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="hvciEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_HVCI_POLICY \| @[?String == 'HypervisorEnforcedCodeIntegrityEnable'].Value")));c:[type=="hvciEnabledSet", issuer=="AttestationPolicy"] => issue(type="hvciEnabled", value=ContainsOnlyValue(c.value, 1));![type=="hvciEnabled", issuer=="AttestationPolicy"] => issue(type="hvciEnabled", value=false);` |
-| Input-output memory management unit (IOMMU) | IOMMU translates virtual to physical memory addresses for Direct memory access-capable device peripherals. IOMMU protects sensitive memory regions. | `// IOMMUc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="iommuEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_IOMMU_REQUIRED")));c:[type=="iommuEnabledSet", issuer=="AttestationPolicy"] => issue(type="iommuEnabled", value=ContainsOnlyValue(c.value, true));![type=="iommuEnabled", issuer=="AttestationPolicy"] => issue(type="iommuEnabled", value=false);` |
-| PCR value evaluation | PCRs contain measurements of components that are made during the boot. These measurements can be used to verify the components against golden or known measurements. | `//PCRS are only read-only and thus cannot be used with issue operation, but they can be used to validate expected/golden measurements.c:[type == "pcrs", issuer=="AttestationService"] && c1:[type=="pcrMatchesExpectedValue", value==JsonToClaimValue(JmesPath(c.value, "PCRs[? Index == `0`].Digests.SHA1 \| @[0] == `\"KCk6Ow\"`"))] => issue(claim = c1);` |
-| Boot Manager version | The security version number of the Boot Manager that was loaded during initial boot on the attested device. | `// Find the first EVENT_APPLICATION_SVN. That value is the Boot Manager SVN// Find the first EV_SEPARATOR in PCR 12, 13, Or 14c:[type=="events", issuer=="AttestationService"] => add(type="evSeparatorSeq", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_SEPARATOR' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `14`)] \| @[0].EventSeq"));c:[type=="evSeparatorSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value=AppendString(AppendString("Events[? EventSeq < `", c.value), "`"));[type=="evSeparatorSeq", value=="null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value="Events[? `true` ");// Find the first EVENT_APPLICATION_SVN. That value is the Boot Manager SVNc:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] => add(type="bootMgrSvnSeqQuery", value=AppendString(c.value, " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12` && ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN] \| @[0].EventSeq"));c1:[type=="bootMgrSvnSeqQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="bootMgrSvnSeq", value=JmesPath(c2.value, c1.value));c:[type=="bootMgrSvnSeq", value!="null", issuer=="AttestationPolicy"] => add(type="bootMgrSvnQuery", value=AppendString(AppendString("Events[? EventSeq == `", c.value), "`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN \| @[0]"));c1:[type=="bootMgrSvnQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => issue(type="bootMgrSvn", value=JsonToClaimValue(JmesPath(c2.value, c1.value)));` |
-| Safe mode | Safe mode is a troubleshooting option for Windows that starts your computer in a limited state. Only the basic files and drivers necessary to run Windows are started. | `// Safe modec:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="safeModeEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_SAFEMODE")));c:[type=="safeModeEnabledSet", issuer=="AttestationPolicy"] => issue(type="notSafeMode", value=ContainsOnlyValue(c.value, false));![type=="notSafeMode", issuer=="AttestationPolicy"] => issue(type="notSafeMode", value=true);` |
-| WinPE boot | Windows pre-installation Environment (Windows PE) is a minimal operating system with limited services that's used to prepare a computer for Windows installation, to copy disk images from a network file server, and to initiate Windows setup. | `// Win PEc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="winPEEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_WINPE")));c:[type=="winPEEnabledSet", issuer=="AttestationPolicy"] => issue(type="notWinPE", value=ContainsOnlyValue(c.value, false));![type=="notWinPE", issuer=="AttestationPolicy"] => issue(type="notWinPE", value=true);` |
-| Code integrity (CI) policy | Hash of CI policy that's controlling the security of the boot environment. | `// CI Policyc :[type=="events", issuer=="AttestationService"] => issue(type="codeIntegrityPolicy", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_SI_POLICY[].RawData")));`|
-| Secure Boot Configuration Policy Hash (SBCPHash) | SBCPHash is the fingerprint of the Custom SBCP that was loaded during boot in Windows devices, except PCs. | `// Secure Boot Custom Policyc:[type=="events", issuer=="AttestationService"] => issue(type="secureBootCustomPolicy", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && PcrIndex == `7` && ProcessedData.UnicodeName == 'CurrentPolicy' && ProcessedData.VariableGuid == '77FA9ABD-0359-4D32-BD60-28F4E78F784B'].ProcessedData.VariableData \| @[0]")));` |
-| Boot application subversion | The version of the Boot Manager that's running on the device. | `// Find the first EV_SEPARATOR in PCR 12, 13, Or 14, the ordering of the events is critical to ensure correctness.c:[type=="events", issuer=="AttestationService"] => add(type="evSeparatorSeq", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_SEPARATOR' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `14`)] \| @[0].EventSeq"));c:[type=="evSeparatorSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value=AppendString(AppendString("Events[? EventSeq < `", c.value), "`"));[type=="evSeparatorSeq", value=="null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value="Events[? `true` "); // No restriction of EV_SEPARATOR in case it is not present// Find the first EVENT_TRANSFER_CONTROL with value 1 or 2 in PCR 12 that is before the EV_SEPARATORc1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="bootMgrSvnSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepAfterBootMgrSvnClause", value=AppendString(AppendString(AppendString(c1.value, "&& EventSeq >= `"), c2.value), "`"));c:[type=="beforeEvSepAfterBootMgrSvnClause", issuer=="AttestationPolicy"] => add(type="tranferControlQuery", value=AppendString(c.value, " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12`&& (ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_TRANSFER_CONTROL.Value == `1` \|\| ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_TRANSFER_CONTROL.Value == `2`)] \| @[0].EventSeq"));c1:[type=="tranferControlQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="tranferControlSeq", value=JmesPath(c2.value, c1.value));// Find the first non-null EVENT_MODULE_SVN in PCR 13 after the transfer control.c:[type=="tranferControlSeq", value!="null", issuer=="AttestationPolicy"] => add(type="afterTransferCtrlClause", value=AppendString(AppendString(" && EventSeq > `", c.value), "`"));c1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="afterTransferCtrlClause", issuer=="AttestationPolicy"] => add(type="moduleQuery", value=AppendString(AppendString(c1.value, c2.value), " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13` && ((ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_LOADEDMODULE_AGGREGATION[].EVENT_MODULE_SVN \| @[0]) \|\| (ProcessedData.EVENT_LOADEDMODULE_AGGREGATION[].EVENT_MODULE_SVN \| @[0]))].EventSeq \| @[0]"));c1:[type=="moduleQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="moduleSeq", value=JmesPath(c2.value, c1.value));// Find the first EVENT_APPLICATION_SVN after EV_EVENT_TAG in PCR 12. That value is Boot App SVNc:[type=="moduleSeq", value!="null", issuer=="AttestationPolicy"] => add(type="applicationSvnAfterModuleClause", value=AppendString(AppendString(" && EventSeq > `", c.value), "`"));c1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="applicationSvnAfterModuleClause", issuer=="AttestationPolicy"] => add(type="bootAppSvnQuery", value=AppendString(AppendString(c1.value, c2.value), " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN \| @[0]"));c1:[type=="bootAppSvnQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => issue(type="bootAppSvn", value=JsonToClaimValue(JmesPath(c2.value, c1.value)));` |
-| Boot revision list | Boot revision list used to direct the device to an enterprise honeypot to further monitor the device's activities. | `// Boot Rev List Info c:[type=="events", issuer=="AttestationService"] => issue(type="bootRevListInfo", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_BOOT_REVOCATION_LIST.RawData \| @[0]")));` |
+| Secure boot |Device boots using only software that's trusted by the OEM. | ` c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']"));`<br>` => issue(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] \| length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'")));`<br>` ![type=="secureBootEnabled", issuer=="AttestationPolicy"] => issue(type="secureBootEnabled", value=false);` |
+| Code integrity |Code integrity is a feature that validates the integrity of a driver or system file each time it's loaded into memory.| `// Retrieve bool properties `<br>` c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="codeIntegrityEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_CODEINTEGRITY")));`<br>`c:[type=="codeIntegrityEnabledSet", issuer=="AttestationPolicy"] => issue(type="codeIntegrityEnabled", value=ContainsOnlyValue(c.value, true));`<br>`![type=="codeIntegrityEnabled", issuer=="AttestationPolicy"] => issue(type="codeIntegrityEnabled", value=false);` |
+|BitLocker [Boot state] |Used for encryption of device drives.| `// Bitlocker Boot Status, The first non zero measurement or zero.`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => issue(type="bitlockerStatus", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BITLOCKER_UNLOCK \| @[? Value != `0`].Value \| @[0]")));`<br>`![type=="bitlockerStatus"] => issue(type="bitlockerStatus", value=0);` |
+| Early Launch Antimalware (ELAM) | ELAM protects against loading unsigned or malicious drivers during boot. | `// Elam Driver (windows defender) Loaded.`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="boolProperties", issuer=="AttestationPolicy"] => issue(type="elamDriverLoaded", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_LOADEDMODULE_AGGREGATION[] \| [? EVENT_IMAGEVALIDATED == `true` && (equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wdboot.sys') \|\| equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wd\\wdboot.sys'))] \| @ != `null`")));`<br>`![type=="elamDriverLoaded", issuer=="AttestationPolicy"] => issue(type="elamDriverLoaded", value=false);` |
+| Boot debugging |Allows the user to connect to a boot debugger. Can be used to bypass secure boot and other boot protections. | `// Boot debugging`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="bootDebuggingEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BOOTDEBUGGING")));`<br>`c:[type=="bootDebuggingEnabledSet", issuer=="AttestationPolicy"] => issue(type="bootDebuggingDisabled", value=ContainsOnlyValue(c.value, false));`<br>`![type=="bootDebuggingDisabled", issuer=="AttestationPolicy"] => issue(type="bootDebuggingDisabled", value=false);` |
+| Kernel debugging | Allows the user to connect a kernel debugger. Grants access to all system resources (less virtualization-based security [VBS] protected resources). | `// Kernel Debugging`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="osKernelDebuggingEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_OSKERNELDEBUG")));`<br>`c:[type=="osKernelDebuggingEnabledSet", issuer=="AttestationPolicy"] => issue(type="osKernelDebuggingDisabled", value=ContainsOnlyValue(c.value, false));`<br>`![type=="osKernelDebuggingDisabled", issuer=="AttestationPolicy"] => issue(type="osKernelDebuggingDisabled", value=false);` |
+|Data Execution Prevention (DEP) policy | DEP policy is a set of hardware and software technologies that perform extra checks on memory to help prevent malicious code from running on a system. | `// DEP Policy`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="boolProperties", issuer=="AttestationPolicy"] => issue(type="depPolicy", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_DATAEXECUTIONPREVENTION.Value \| @[-1]")));`<br>`![type=="depPolicy"] => issue(type="depPolicy", value=0);` |
+| Test and flight signing | Enables the user to run test-signed code. | `// Test Signing `<br>`c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>` c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="testSigningEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_TESTSIGNING")));`<br>` c:[type=="testSigningEnabledSet", issuer=="AttestationPolicy"] => issue(type="testSigningDisabled", value=ContainsOnlyValue(c.value, false));`<br>` ![type=="testSigningDisabled", issuer=="AttestationPolicy"] => issue(type="testSigningDisabled", value=false);`<br>`//Flight Signingc:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="flightSigningEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_FLIGHTSIGNING")));`<br>`c:[type=="flightSigningEnabledSet", issuer=="AttestationPolicy"] => issue(type="flightSigningNotEnabled", value=ContainsOnlyValue(c.value, false));`<br>`![type=="flightSigningNotEnabled", issuer=="AttestationPolicy"] => issue(type="flightSigningNotEnabled", value=false);` |
+| Virtual Secure Mode/VBS | VBS uses the Windows hypervisor to create this virtual secure mode that's used to protect vital system and operating system resources and credentials. | `// VSM enabled `<br>` c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="vsmEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_VBS_VSM_REQUIRED")));`<br>`c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="vsmEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_MANDATORY_ENFORCEMENT")));`<br>`c:[type=="vsmEnabledSet", issuer=="AttestationPolicy"] => issue(type="vsmEnabled", value=ContainsOnlyValue(c.value, true));`<br>`![type=="vsmEnabled", issuer=="AttestationPolicy"] => issue(type="vsmEnabled", value=false);`<br>`c:[type=="vsmEnabled", issuer=="AttestationPolicy"] => issue(type="vbsEnabled", value=c.value);` |
+| Hypervisor-protected code integrity (HVCI) | HVCI is a feature that validates the integrity of a system file each time it's loaded into memory.| `// HVCI`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="hvciEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_HVCI_POLICY \| @[?String == 'HypervisorEnforcedCodeIntegrityEnable'].Value")));`<br>`c:[type=="hvciEnabledSet", issuer=="AttestationPolicy"] => issue(type="hvciEnabled", value=ContainsOnlyValue(c.value, 1));`<br>`![type=="hvciEnabled", issuer=="AttestationPolicy"] => issue(type="hvciEnabled", value=false);` |
+| Input-output memory management unit (IOMMU) | IOMMU translates virtual to physical memory addresses for Direct memory access-capable device peripherals. IOMMU protects sensitive memory regions. | `// IOMMU`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="iommuEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_IOMMU_REQUIRED")));`<br>`c:[type=="iommuEnabledSet", issuer=="AttestationPolicy"] => issue(type="iommuEnabled", value=ContainsOnlyValue(c.value, true));`<br>`![type=="iommuEnabled", issuer=="AttestationPolicy"] => issue(type="iommuEnabled", value=false);` |
+| PCR value evaluation | PCRs contain measurements of components that are made during the boot. These measurements can be used to verify the components against golden or known measurements. | `//PCRS are only read-only and thus cannot be used with issue operation, but they can be used to validate expected/golden measurements.`<br>`c:[type == "pcrs", issuer=="AttestationService"] && c1:[type=="pcrMatchesExpectedValue", value==JsonToClaimValue(JmesPath(c.value, "PCRs[? Index == `0`].Digests.SHA1 \| @[0] == `\"KCk6Ow\"`"))] => issue(claim = c1);` |
+| Boot Manager version | The security version number of the Boot Manager that was loaded during initial boot on the attested device. | `// Find the first EVENT_APPLICATION_SVN. That value is the Boot Manager SVN`<br>`// Find the first EV_SEPARATOR in PCR 12, 13, Or 14`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="evSeparatorSeq", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_SEPARATOR' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `14`)] \| @[0].EventSeq"));`<br>`c:[type=="evSeparatorSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value=AppendString(AppendString("Events[? EventSeq < `", c.value), "`"));`<br>`[type=="evSeparatorSeq", value=="null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value="Events[? `true` ");`<br>`// Find the first EVENT_APPLICATION_SVN. That value is the Boot Manager SVNc:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] => add(type="bootMgrSvnSeqQuery", value=AppendString(c.value, " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12` && ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN] \| @[0].EventSeq"));`<br>`c1:[type=="bootMgrSvnSeqQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="bootMgrSvnSeq", value=JmesPath(c2.value, c1.value));`<br>`c:[type=="bootMgrSvnSeq", value!="null", issuer=="AttestationPolicy"] => add(type="bootMgrSvnQuery", value=AppendString(AppendString("Events[? EventSeq == `", c.value), "`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN \| @[0]"));`<br>`c1:[type=="bootMgrSvnQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => issue(type="bootMgrSvn", value=JsonToClaimValue(JmesPath(c2.value, c1.value)));` |
+| Safe mode | Safe mode is a troubleshooting option for Windows that starts your computer in a limited state. Only the basic files and drivers necessary to run Windows are started. | `// Safe mode`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="safeModeEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_SAFEMODE")));`<br>`c:[type=="safeModeEnabledSet", issuer=="AttestationPolicy"] => issue(type="notSafeMode", value=ContainsOnlyValue(c.value, false));`<br>`![type=="notSafeMode", issuer=="AttestationPolicy"] => issue(type="notSafeMode", value=true);` |
+| WinPE boot | Windows pre-installation Environment (Windows PE) is a minimal operating system with limited services that's used to prepare a computer for Windows installation, to copy disk images from a network file server, and to initiate Windows setup. | `// Win PE`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="winPEEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_WINPE")));`<br>`c:[type=="winPEEnabledSet", issuer=="AttestationPolicy"] => issue(type="notWinPE", value=ContainsOnlyValue(c.value, false));`<br>`![type=="notWinPE", issuer=="AttestationPolicy"] => issue(type="notWinPE", value=true);` |
+| Code integrity (CI) policy | Hash of CI policy that's controlling the security of the boot environment. | `// CI Policy`<br>`c :[type=="events", issuer=="AttestationService"] => issue(type="codeIntegrityPolicy", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_SI_POLICY[].RawData")));`|
+| Secure Boot Configuration Policy Hash (SBCPHash) | SBCPHash is the fingerprint of the Custom SBCP that was loaded during boot in Windows devices, except PCs. | `// Secure Boot Custom Policy`<br>`c:[type=="events", issuer=="AttestationService"] => issue(type="secureBootCustomPolicy", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && PcrIndex == `7` && ProcessedData.UnicodeName == 'CurrentPolicy' && ProcessedData.VariableGuid == '77FA9ABD-0359-4D32-BD60-28F4E78F784B'].ProcessedData.VariableData \| @[0]")));` |
+| System Guard (DRTM Validation and SMM Levels) | Ensure System Guard has been validated during boot and corresponding System Management Mode Level | ` // Extract the DRTM state auth event. `<br>`// The rule attempts to find the valid DRTM state auth event by applying following conditions:`<br>`// 1. There is only one DRTM state auth event in the events log`<br>`// 2. The EVENT_DRTM_STATE_AUTH.SignatureValid field in the DRTM state auth event is set to true`<br><br>` c:[type=="events", issuer=="AttestationService"] => add(type="validDrtmStateAuthEvent", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `20` && ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_DRTM_STATE_AUTH.SignatureValid != `null`] \| length(@) == `1` && @[0] \| @.{EventSeq:EventSeq, SignatureValid:ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_DRTM_STATE_AUTH.SignatureValid}"));`<br><br>` // Check if Signature is valid in extracted state auth events`<br>`c:[type=="validDrtmStateAuthEvent", issuer=="AttestationPolicy"] => issue(type="drtmMleValid", value=JsonToClaimValue(JmesPath(c.value, "SignatureValid")));`<br>`![type=="drtmMleValid", issuer=="AttestationPolicy"] => issue(type="drtmMleValid", value=false);`<br><br>`// Get the sequence number of the DRTM state auth event.`<br>`// The sequence number is used to ensure that the SMM event appears before the last DRTM state auth event.`<br>`[type=="drtmMleValid", value==true, issuer=="AttestationPolicy"] && c:[type=="validDrtmStateAuthEvent", issuer=="AttestationPolicy"] => add(type="validDrtmStateAuthEventSeq", value=JmesPath(c.value, "EventSeq"));`<br><br>` // Create query for SMM event`<br>`// The query is constructed to find the SMM level from the SMM level event that appears exactly once before the valid DRTM state auth event in the event log`<br>`[type=="drtmMleValid", value==true, issuer=="AttestationPolicy"] && c:[type=="validDrtmStateAuthEventSeq", issuer=="AttestationPolicy"] => add(type="smmQuery", value=AppendString(AppendString("Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `20` && EventSeq <`", c.value), "`].ProcessedData.EVENT_DRTM_SMM \| length(@) == `1` && @[0] \| @.Value"));`<br><br>`// Extract SMM value`<br>`[type=="drtmMleValid", value==true, issuer=="AttestationPolicy"] &&`<br>` c1:[type=="smmQuery", issuer=="AttestationPolicy"] &&`<br>` c2:[type=="events", issuer=="AttestationService"] => issue(type="smmLevel", value=JsonToClaimValue(JmesPath(c2.value, c1.value)));`<br>` ` |
+| Boot application subversion | The version of the Boot Manager that's running on the device. | `// Find the first EV_SEPARATOR in PCR 12, 13, Or 14, the ordering of the events is critical to ensure correctness.`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="evSeparatorSeq", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_SEPARATOR' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `14`)] \| @[0].EventSeq"));`<br>`c:[type=="evSeparatorSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value=AppendString(AppendString("Events[? EventSeq < `", c.value), "`"));`<br>`[type=="evSeparatorSeq", value=="null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value="Events[? `true` ");`<br>` // No restriction of EV_SEPARATOR in case it is not present// Find the first EVENT_TRANSFER_CONTROL with value 1 or 2 in PCR 12 that is before the EV_SEPARATORc1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="bootMgrSvnSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepAfterBootMgrSvnClause", value=AppendString(AppendString(AppendString(c1.value, "&& EventSeq >= `"), c2.value), "`"));`<br>`c:[type=="beforeEvSepAfterBootMgrSvnClause", issuer=="AttestationPolicy"] => add(type="tranferControlQuery", value=AppendString(c.value, " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12`&& (ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_TRANSFER_CONTROL.Value == `1` \|\| ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_TRANSFER_CONTROL.Value == `2`)] \| @[0].EventSeq"));`<br>`c1:[type=="tranferControlQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="tranferControlSeq", value=JmesPath(c2.value, c1.value));`<br>`// Find the first non-null EVENT_MODULE_SVN in PCR 13 after the transfer control.c:[type=="tranferControlSeq", value!="null", issuer=="AttestationPolicy"] => add(type="afterTransferCtrlClause", value=AppendString(AppendString(" && EventSeq > `", c.value), "`"));`<br>`c1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="afterTransferCtrlClause", issuer=="AttestationPolicy"] => add(type="moduleQuery", value=AppendString(AppendString(c1.value, c2.value), " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13` && ((ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_LOADEDMODULE_AGGREGATION[].EVENT_MODULE_SVN \| @[0]) \|\| (ProcessedData.EVENT_LOADEDMODULE_AGGREGATION[].EVENT_MODULE_SVN \| @[0]))].EventSeq \| @[0]"));`<br>`c1:[type=="moduleQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="moduleSeq", value=JmesPath(c2.value, c1.value));`<br>`// Find the first EVENT_APPLICATION_SVN after EV_EVENT_TAG in PCR 12. That value is Boot App SVNc:[type=="moduleSeq", value!="null", issuer=="AttestationPolicy"] => add(type="applicationSvnAfterModuleClause", value=AppendString(AppendString(" && EventSeq > `", c.value), "`"));`<br>`c1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="applicationSvnAfterModuleClause", issuer=="AttestationPolicy"] => add(type="bootAppSvnQuery", value=AppendString(AppendString(c1.value, c2.value), " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN \| @[0]"));`<br>`c1:[type=="bootAppSvnQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => issue(type="bootAppSvn", value=JsonToClaimValue(JmesPath(c2.value, c1.value)));` |
+| Boot revision list | Boot revision list used to direct the device to an enterprise honeypot to further monitor the device's activities. | `// Boot Rev List Info `<br>`c:[type=="events", issuer=="AttestationService"] => issue(type="bootRevListInfo", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_BOOT_REVOCATION_LIST.RawData \| @[0]")));` |
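Most of the rules in the preceding table follow the same three-step pattern: `add` an intermediate claim extracted from the incoming `events` claim with a JmesPath query, `issue` a typed claim computed from that intermediate value, and issue a safe default through a negation rule in case the intermediate claim was never produced. The snippet below is only a minimal sketch of that pattern; the claim names `myEvents` and `myProperty` and the field `SomeField` are illustrative placeholders, not claims defined by the service.

```
// 1. Extract the relevant events into an intermediate claim.
c:[type=="events", issuer=="AttestationService"] => add(type="myEvents", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG']"));

// 2. Issue a claim computed from the intermediate value.
c:[type=="myEvents", issuer=="AttestationPolicy"] => issue(type="myProperty", value=JsonToClaimValue(JmesPath(c.value, "[*].SomeField")));

// 3. Fall back to a safe default when the claim could not be computed.
![type=="myProperty", issuer=="AttestationPolicy"] => issue(type="myProperty", value=false);
```
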
## Sample policies for TPM attestation using version 1.2
attestation Tpm Attestation Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/tpm-attestation-concepts.md
authorizationrules {
issuancerules {
-[type=="aikValidated", value==true]&&
+[type=="aikValidated", value==true] &&
[type=="secureBootEnabled", value==true] => issue(type="PlatformAttested", value=true);
-}
+};
```
The following example takes advantage of policy version 1.2 to verify details ab
``` version=1.2;
-authorizationrules {
+authorizationrules {
=> permit(); }; - issuancerules { // Verify if secure boot is enabled c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']"));
-c:[type=="efiConfigVariables", issuer="AttestationPolicy"]=> add(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'")));
+c:[type=="efiConfigVariables", issuer=="AttestationPolicy"]=> add(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'")));
![type=="secureBootEnabled", issuer=="AttestationPolicy"] => add(type="secureBootEnabled", value=false); // HVCI
-c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == 12 || PcrIndex == 19)].ProcessedData.EVENT_TRUSTBOUNDARY"));
+c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` || PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));
c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="hvciEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_HVCI_POLICY | @[?String == 'HypervisorEnforcedCodeIntegrityEnable'].Value"))); c:[type=="hvciEnabledSet", issuer=="AttestationPolicy"] => issue(type="hvciEnabled", value=ContainsOnlyValue(c.value, 1)); ![type=="hvciEnabled", issuer=="AttestationPolicy"] => issue(type="hvciEnabled", value=false);
-// System Guard Secure Launch
- // Validating unwanted(malicious.sys) driver is not loaded
-c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == 12 || PcrIndex == 13 || PcrIndex == 19 || PcrIndex == 20)].ProcessedData.EVENT_TRUSTBOUNDARY"));
-c:[type=="boolProperties", issuer=="AttestationPolicy"] => issue(type="MaliciousDriverLoaded", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_LOADEDMODULE_AGGREGATION[] | [? EVENT_IMAGEVALIDATED == true && (equals_ignore_case(EVENT_FILEPATH, '\windows\system32\drivers\malicious.sys') || equals_ignore_case(EVENT_FILEPATH, '\windows\system32\drivers\wd\malicious.sys'))] | @ != null")));
+c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` || PcrIndex == `13` || PcrIndex == `19` || PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));
+c:[type=="boolProperties", issuer=="AttestationPolicy"] => issue(type="MaliciousDriverLoaded", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_LOADEDMODULE_AGGREGATION[] | [? EVENT_IMAGEVALIDATED == true && (equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\malicious.sys') || equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wd\\malicious.sys'))] | @ != null ")));
![type=="MaliciousDriverLoaded", issuer=="AttestationPolicy"] => issue(type="MaliciousDriverLoaded", value=false); }; ```
+## Extending protection against malicious boot attacks via Integrity Measurement Architecture (IMA) on Linux
+
+Linux systems follow a boot process similar to Windows, and with TPM attestation the protection profile can be extended beyond boot into the kernel by using the Integrity Measurement Architecture (IMA). The IMA subsystem was designed to detect whether files have been accidentally or maliciously altered, both remotely and locally. It maintains a runtime measurement list and, when anchored in a hardware Trusted Platform Module (TPM), an aggregate integrity value over this list, which provides resiliency against software attacks. Recent enhancements in the IMA subsystem also allow non-file attributes to be measured and attested remotely. Azure Attestation supports remote attestation of these non-file measurements to provide a holistic view of system integrity.
+
+Enabling IMA with the following ima-policy enables measurement of non-file attributes while still allowing local file integrity attestation.
+
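The ima-policy itself isn't reproduced in this excerpt and varies by distribution. As a rough sketch only (an assumption, not the exact policy from the article), a policy that measures the non-file attributes referenced by the attestation policy below, the kernel command line and keys added to the built-in trusted keyring, could look like this:

```
# Measure the kernel command line passed to the kernel
measure func=KEXEC_CMDLINE
# Measure keys added to the built-in trusted keyring
measure func=KEY_CHECK keyrings=.builtin_trusted_keys
```

On many distributions a custom IMA policy is conventionally placed in /etc/ima/ima-policy so that it's loaded early at boot; verify the exact mechanism against your distribution's documentation.
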
+Using the following attestation policy, you can validate secure boot, the kernel signature, the kernel version, the kernel cmdline passed in by GRUB, and other key security attributes supported by IMA.
+
+```
+version = 1.2;
+
+configurationrules
+{
+};
+
+authorizationrules
+{
+ [type == "aikValidated", value==true]
+ => permit();
+};
+
+issuancerules {
+ // Retrieve all EFI Boot variables with event = 'EV_EFI_VARIABLE_BOOT'
+ c:[type == "events", issuer=="AttestationService"] => add(type ="efiBootVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_BOOT']"));
+
+ // Retrieve all EFI Driver Config variables with event = 'EV_EFI_VARIABLE_DRIVER_CONFIG'
+ c:[type == "events", issuer=="AttestationService"] => add(type ="efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG']"));
+
+ // Grab all IMA events
+ c:[type=="events", issuer=="AttestationService"] => add(type="imaMeasurementEvents", value=JmesPath(c.value, "Events[?EventTypeString == 'IMA_MEASUREMENT_EVENT']"));
+
+ // Look for "Boot Order" from EFI Boot Data
+ c:[type == "efiBootVariables", issuer=="AttestationPolicy"] => add(type = "bootOrderFound", value = JmesPath(c.value, "[?ProcessedData.UnicodeName == 'BootOrder'] | length(@) == `1` && @[0].PcrIndex == `1` && @[0].ProcessedData.VariableData"));
+ c:[type=="bootOrderFound", issuer=="AttestationPolicy"] => issue(type="bootOrder", value=JsonToClaimValue(c.value));
+ ![type=="bootOrderFound", issuer=="AttestationPolicy"] => issue(type="bootOrder", value=0);
+
+ // Look for "Secure Boot" from EFI Driver Configuration Data
+ c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => issue(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].PcrIndex == `7` && @[0].ProcessedData.VariableData == 'AQ'")));
+ ![type=="secureBootEnabled", issuer=="AttestationPolicy"] => issue(type="secureBootEnabled", value=false);
+
+ // Look for "Platform Key" from EFI Boot Data
+ c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => add(type = "platformKeyFound", value = JmesPath(c.value, "[?ProcessedData.UnicodeName == 'PK'] | length(@) == `1` && @[0].PcrIndex == `7` && @[0].ProcessedData.VariableData"));
+ c:[type=="platformKeyFound", issuer=="AttestationPolicy"] => issue(type="platformKey", value=JsonToClaimValue(c.value));
+ ![type=="platformKeyFound", issuer=="AttestationPolicy"] => issue(type="platformKey", value=0);
+
+ // Look for "Key Exchange Key" from EFI Driver Configuration Data
+ c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => add(type = "keyExchangeKeyFound", value = JmesPath(c.value, "[?ProcessedData.UnicodeName == 'KEK'] | length(@) == `1` && @[0].PcrIndex == `7` && @[0].ProcessedData.VariableData"));
+ c:[type=="keyExchangeKeyFound", issuer=="AttestationPolicy"] => issue(type="keyExchangeKey", value=JsonToClaimValue(c.value));
+ ![type=="keyExchangeKeyFound", issuer=="AttestationPolicy"] => issue(type="keyExchangeKey", value=0);
+
+ // Look for "Key Database" from EFI Driver Configuration Data
+ c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => add(type = "keyDatabaseFound", value = JmesPath(c.value, "[?ProcessedData.UnicodeName == 'db'] | length(@) == `1` && @[0].PcrIndex == `7` && @[0].ProcessedData.VariableData"));
+ c:[type=="keyDatabaseFound", issuer=="AttestationPolicy"] => issue(type="keyDatabase", value=JsonToClaimValue(c.value));
+ ![type=="keyDatabaseFound", issuer=="AttestationPolicy"] => issue(type="keyDatabase", value=0);
+
+ // Look for "Forbidden Signatures" from EFI Driver Configuration Data
+ c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => add(type = "forbiddenSignaturesFound", value = JmesPath(c.value, "[?ProcessedData.UnicodeName == 'dbx'] | length(@) == `1` && @[0].PcrIndex == `7` && @[0].ProcessedData.VariableData"));
+ c:[type=="forbiddenSignaturesFound", issuer=="AttestationPolicy"] => issue(type="forbiddenSignatures", value=JsonToClaimValue(c.value));
+ ![type=="forbiddenSignaturesFound", issuer=="AttestationPolicy"] => issue(type="forbiddenSignatures", value=0);
+
+ // Look for "Kernel Version" in IMA Measurement events
+ c:[type=="imaMeasurementEvents", issuer=="AttestationPolicy"] => add(type="kernelVersionsFound", value=JmesPath(c.value, "[].ProcessedData.KernelVersion"));
+ c:[type=="kernelVersionsFound", issuer=="AttestationPolicy"] => issue(type="kernelVersions", value=JsonToClaimValue(c.value));
+ ![type=="kernelVersionsFound", issuer=="AttestationPolicy"] => issue(type="kernelVersions", value=0);
+
+ // Look for "Built-In Trusted Keys" in IMA Measurement events
+ c:[type=="imaMeasurementEvents", issuer=="AttestationPolicy"] => add(type="builtintrustedkeysFound", value=JmesPath(c.value, "[? ProcessedData.Keyring == '.builtin_trusted_keys'].ProcessedData.CertificateSubject"));
+ c:[type=="builtintrustedkeysFound", issuer=="AttestationPolicy"] => issue(type="builtintrustedkeys", value=JsonToClaimValue(c.value));
+ ![type=="builtintrustedkeysFound", issuer=="AttestationPolicy"] => issue(type="builtintrustedkeys", value=0);
+};
+
+```
+
+Note: Support for non-file-based measurements is only available from Linux kernel version 5.15 onward.
+
+## TPM Key attestation support
+
+Numerous applications rely on foundational credential management of keys and certificates for protection against credential theft, and one of the main ways of ensuring credential security is to rely on key storage providers, which provide additional protection from malware and attacks. Windows implements various cryptographic providers that can be either software-based or hardware-based.
+
+The two most important ones are:
+
+* Microsoft Software Key Storage Provider: Standard provider, which stores keys in software and supports Cryptography Next Generation (CNG)
+
+* Microsoft Platform Crypto Provider: Hardware-based provider, which stores keys on a TPM (Trusted Platform Module) and also supports CNG
+
+Whenever a storage provider is used, it's usually to create a public/private key pair that is chained to a root of trust. At creation, additional properties can be used to control certain aspects of the key storage, such as exportability. Key attestation in this context is the technical ability to prove to a relying party that a private key was generated inside, and is managed inside, the key storage provider, in a non-exportable form. Such attestation, combined with other information, can help protect against credential theft and replay attacks.
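To make this concrete, the following is a minimal sketch (not taken from the article) of creating a TPM-backed, non-exportable RSA key pair with the Microsoft Platform Crypto Provider through the CNG key storage (NCrypt) API. The key name ContosoTpmKey is an illustrative placeholder.

```
// Minimal sketch: create a non-exportable RSA key on the TPM through the
// Microsoft Platform Crypto Provider (Windows CNG key storage API).
// Compile as C on Windows and link against ncrypt.lib.
#include <windows.h>
#include <ncrypt.h>
#include <stdio.h>

int main(void)
{
    NCRYPT_PROV_HANDLE hProv = 0;
    NCRYPT_KEY_HANDLE hKey = 0;

    // Open the TPM-backed key storage provider.
    SECURITY_STATUS status = NCryptOpenStorageProvider(&hProv, MS_PLATFORM_CRYPTO_PROVIDER, 0);
    if (status != ERROR_SUCCESS) { printf("Open provider failed: 0x%lx\n", (unsigned long)status); return 1; }

    // Create a persisted RSA key. Keys created this way are non-exportable
    // unless the export policy property is explicitly changed to allow export.
    status = NCryptCreatePersistedKey(hProv, &hKey, NCRYPT_RSA_ALGORITHM, L"ContosoTpmKey", 0, 0);
    if (status == ERROR_SUCCESS) {
        DWORD keyLength = 2048;
        NCryptSetProperty(hKey, NCRYPT_LENGTH_PROPERTY, (PBYTE)&keyLength, sizeof(keyLength), 0);

        // Finalizing the key generates the key pair inside the TPM.
        status = NCryptFinalizeKey(hKey, 0);
        printf("Key creation finished with status 0x%lx\n", (unsigned long)status);
        NCryptFreeObject(hKey);
    }

    NCryptFreeObject(hProv);
    return status == ERROR_SUCCESS ? 0 : 1;
}
```
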
+
+TPMs also provide the ability to attest that keys are resident in a TPM, enabling higher security assurance, backed by non-exportability, anti-hammering, and isolation of keys. A common use case is applications that issue digital signature certificates for subscriber keys, verifying that the subscriber's private signature key is generated and managed in an approved TPM.
+One can easily attest that the keys are resident in a valid TPM, with the appropriate non-exportability flags, by using a policy like the one below.
+
+```
+version=1.2;
+
+authorizationrules
+{
+ => permit();
+};
+
+issuancerules
+{
+ // Key Attest Policy
+ // -- Validating key types
+ c:[type=="x-ms-tpm-request-key", issuer=="AttestationService"] => add(type="requestKeyType", value=JsonToClaimValue(JmesPath(c.value, "jwk.kty")));
+ c:[type=="x-ms-tpm-other-keys", issuer=="AttestationService"] => add(type="otherKeysTypes", value=JsonToClaimValue(JmesPath(c.value, "[*].jwk.kty")));
+ c:[type=="requestKeyType", issuer=="AttestationPolicy", value=="RSA"] => issue(type="requestKeyType", value="RSA");
+ c:[type=="otherKeysTypes", issuer=="AttestationPolicy", value=="RSA"] => issue(type="otherKeysTypes", value="RSA");
+
+ // -- Validating tpm_quote attributes
+ c:[type=="x-ms-tpm-request-key", issuer=="AttestationService"] => add(type="requestKeyQuote", value=JmesPath(c.value, "info.tpm_quote"));
+ c:[type=="requestKeyQuote", issuer=="AttestationPolicy"] => add(type="requestKeyQuoteHashAlg", value=JsonToClaimValue(JmesPath(c.value, "hash_alg")));
+ c:[type=="requestKeyQuoteHashAlg", issuer=="AttestationPolicy", value=="sha-256"] => issue(type="requestKeyQuoteHashAlg", value="sha-256");
+
+ // -- Validating tpm_certify attributes
+ c:[type=="x-ms-tpm-request-key", issuer=="AttestationService"] => add(type="requestKeyCertify", value=JmesPath(c.value, "info.tpm_certify"));
+ c:[type=="requestKeyCertify", issuer=="AttestationPolicy"] => add(type="requestKeyCertifyNameAlg", value=JsonToClaimValue(JmesPath(c.value, "name_alg")));
+ c:[type=="requestKeyCertifyNameAlg", issuer=="AttestationPolicy", value==11] => issue(type="requestKeyCertifyNameAlg", value=11);
+
+ c:[type=="requestKeyCertify", issuer=="AttestationPolicy"] => add(type="requestKeyCertifyObjAttr", value=JsonToClaimValue(JmesPath(c.value, "obj_attr")));
+ c:[type=="requestKeyCertifyObjAttr", issuer=="AttestationPolicy", value==50] => issue(type="requestKeyCertifyObjAttr", value=50);
+
+ c:[type=="requestKeyCertify", issuer=="AttestationPolicy"] => add(type="requestKeyCertifyAuthPolicy", value=JsonToClaimValue(JmesPath(c.value, "auth_policy")));
+ c:[type=="requestKeyCertifyAuthPolicy", issuer=="AttestationPolicy", value=="AQIDBA"] => issue(type="requestKeyCertifyAuthPolicy", value="AQIDBA");
+
+ c:[type=="x-ms-tpm-other-keys", issuer=="AttestationService"] => add(type="otherKeysCertify", value=JmesPath(c.value, "[*].info.tpm_certify"));
+ c:[type=="otherKeysCertify", issuer=="AttestationPolicy"] => add(type="otherKeysCertifyNameAlgs", value=JsonToClaimValue(JmesPath(c.value, "[*].name_alg")));
+ c:[type=="otherKeysCertifyNameAlgs", issuer=="AttestationPolicy", value==11] => issue(type="otherKeysCertifyNameAlgs", value=11);
+
+ c:[type=="otherKeysCertify", issuer=="AttestationPolicy"] => add(type="otherKeysCertifyObjAttr", value=JsonToClaimValue(JmesPath(c.value, "[*].obj_attr")));
+ c:[type=="otherKeysCertifyObjAttr", issuer=="AttestationPolicy", value==50] => issue(type="otherKeysCertifyObjAttr", value=50);
+
+ c:[type=="otherKeysCertify", issuer=="AttestationPolicy"] => add(type="otherKeysCertifyAuthPolicy", value=JsonToClaimValue(JmesPath(c.value, "[*].auth_policy")));
+ c:[type=="otherKeysCertifyAuthPolicy", issuer=="AttestationPolicy", value=="AQIDBA"] => issue(type="otherKeysCertifyAuthPolicy", value="AQIDBA");
+};
+
+```
+ ## Next steps
+- [Try out TPM attestation](azure-tpm-vbs-attestation-usage.md)
- [Device Health Attestation on Windows and interacting with Azure Attestation](/windows/client-management/mdm/healthattestation-csp#windows-11-device-health-attestation) - [Learn more about claim rule grammar](claim-rule-grammar.md) - [Attestation policy claim rule functions](claim-rule-functions.md)
attestation Tpm Attestation Sample Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/tpm-attestation-sample-policies.md
+
+ Title: Examples of an Azure TPM Attestation policy
+description: Examples of Azure Attestation policy for TPM endpoint.
++++ Last updated : 10/12/2022++++
+# Examples of an attestation policy for TPM endpoint
+
+Attestation policy is used to process the attestation evidence and to determine whether Azure Attestation issues an attestation token. Attestation token generation can be controlled with custom policies. Below are some example attestation policies.
+
+## Sample policy for TPM using Policy version 1.0
+
+```
+version=1.0;
+
+authorizationrules {
+ => permit();
+};
+
+issuancerules
+{
+[type=="aikValidated", value==true]&&
+[type=="secureBootEnabled", value==true] &&
+[type=="bootDebuggingDisabled", value==true] &&
+[type=="vbsEnabled", value==true] &&
+[type=="notWinPE", value==true] &&
+[type=="notSafeMode", value==true] => issue(type="PlatformAttested", value=true);
+};
+```
+
+This simple TPM attestation policy can be used to verify minimal aspects of the boot.
+
+## Sample policy for TPM using Policy version 1.2
+
+The policy uses the TPM version to restrict attestation calls. The issuancerules section examines various properties measured during boot.
+
+```
+version=1.2;
+
+configurationrules{
+};
+
+authorizationrules {
+ => permit();
+};
+
+issuancerules{
+
+c:[type == "aikValidated", issuer=="AttestationService"] =>issue(type="aikValidated", value=c.value);
+
+// SecureBoot enabled
+c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']"));
+c:[type == "efiConfigVariables", issuer=="AttestationPolicy"]=> issue(type = "SecureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'")));
+![type=="SecureBootEnabled", issuer=="AttestationPolicy"] => issue(type="SecureBootEnabled", value=false);
+
+// Retrieve bool properties Code integrity
+c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` || PcrIndex == `13` || PcrIndex == `19` || PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));
+c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="codeIntegrityEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_CODEINTEGRITY")));
+c:[type=="codeIntegrityEnabledSet", issuer=="AttestationPolicy"] => issue(type="CodeIntegrityEnabled", value=ContainsOnlyValue(c.value, true));
+![type=="CodeIntegrityEnabled", issuer=="AttestationPolicy"] => issue(type="CodeIntegrityEnabled", value=false);
+
+// BitLocker boot status: the first non-zero measurement, or zero.
+c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` || PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));
+c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="BitlockerStatus", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BITLOCKER_UNLOCK | @[? Value != `0`].Value | @[0]")));
+[type=="BitlockerStatus", issuer=="AttestationPolicy"] => issue(type="BitlockerStatus", value=true);
+![type=="BitlockerStatus", issuer=="AttestationPolicy"] => issue(type="BitlockerStatus", value=false);
+
+// ELAM driver (Windows Defender) loaded
+c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="elamDriverLoaded", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_LOADEDMODULE_AGGREGATION[] | [? EVENT_IMAGEVALIDATED == `true` && (equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wdboot.sys') || equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wd\\wdboot.sys'))] | @ != `null`")));
+[type=="elamDriverLoaded", issuer=="AttestationPolicy"] => issue(type="ELAMDriverLoaded", value=true);
+![type=="elamDriverLoaded", issuer=="AttestationPolicy"] => issue(type="ELAMDriverLoaded", value=false);
+
+};
+
+```
+### Attestation policy to authorize only those TPMs that match known PCR hashes.
+
+```
+version=1.2;
+
+authorizationrules
+{
+ c:[type == "pcrs", issuer=="AttestationService"] => add(type="pcr0Validated", value=JsonToClaimValue(JmesPath(c.value, "PCRs[? Index == `0`].Digests.SHA256 | @[0] =='4c833b1c361fceffd8dc0f81eec76081b71e1a0eb2193caed0b6e1c7735ec33e' ")));
+ c:[type == "pcrs", issuer=="AttestationService"] => add(type="pcr1Validated", value=JsonToClaimValue(JmesPath(c.value, "PCRs[? Index == `1`].Digests.SHA256 | @[0] =='8c25e3be6ad6f5bd33c9ae40d5d5461e88c1a7366df0c9ee5c7e5ff40d3d1d0e' ")));
+ c:[type == "pcrs", issuer=="AttestationService"] => add(type="pcr2Validated", value=JsonToClaimValue(JmesPath(c.value, "PCRs[? Index == `2`].Digests.SHA256 | @[0] =='3d458cfe55cc03ea1f443f1562beec8df51c75e14a9fcf9a7234a13f198e7969' ")));
+ c:[type == "pcrs", issuer=="AttestationService"] => add(type="pcr3Validated", value=JsonToClaimValue(JmesPath(c.value, "PCRs[? Index == `3`].Digests.SHA256 | @[0] =='3d458cfe55cc03ea1f443f1562beec8df51c75e14a9fcf9a7234a13f198e7969' ")));
+
+ [type=="pcr0Validated", value==true] &&
+ [type=="pcr1Validated", value==true] &&
+ [type=="pcr2Validated", value==true] &&
+ [type=="pcr3Validated", value==true] => permit();
+};
+
+issuancerules
+{
+};
+```
+
+### Attestation policy to validate that System Guard is enabled and that its state has been validated.
+
+```
+version = 1.2;
+
+authorizationrules
+{
+ => permit();
+};
+
+issuancerules
+{
+ // Extract the DRTM state auth event
+ // The rule attempts to find the valid DRTM state auth event by applying following conditions:
+ // 1. There is only one DRTM state auth event in the events log
+ // 2. The EVENT_DRTM_STATE_AUTH.SignatureValid field in the DRTM state auth event is set to true
+ c:[type=="events", issuer=="AttestationService"] => add(type="validDrtmStateAuthEvent", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `20` && ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_DRTM_STATE_AUTH.SignatureValid != `null`] | length(@) == `1` && @[0] | @.{EventSeq:EventSeq, SignatureValid:ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_DRTM_STATE_AUTH.SignatureValid}"));
+
+ // Check if Signature is valid in extracted state auth events
+ c:[type=="validDrtmStateAuthEvent", issuer=="AttestationPolicy"] => issue(type="drtmMleValid", value=JsonToClaimValue(JmesPath(c.value, "SignatureValid")));
+ ![type=="drtmMleValid", issuer=="AttestationPolicy"] => issue(type="drtmMleValid", value=false);
+
+ // Get the sequence number of the DRTM state auth event.
+ // The sequence number is used to ensure that the SMM event appears before the last DRTM state auth event.
+ [type=="drtmMleValid", value==true, issuer=="AttestationPolicy"] &&
+ c:[type=="validDrtmStateAuthEvent", issuer=="AttestationPolicy"] => add(type="validDrtmStateAuthEventSeq", value=JmesPath(c.value, "EventSeq"));
+
+ // Create query for SMM event
+ // The query is constructed to find the SMM level from the SMM level event that appears exactly once before the valid DRTM state auth event in the event log
+ [type=="drtmMleValid", value==true, issuer=="AttestationPolicy"] &&
+ c:[type=="validDrtmStateAuthEventSeq", issuer=="AttestationPolicy"] => add(type="smmQuery", value=AppendString(AppendString("Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `20` && EventSeq < `", c.value), "`].ProcessedData.EVENT_DRTM_SMM | length(@) == `1` && @[0] | @.Value"));
+
+ // Extract SMM value
+ [type=="drtmMleValid", value==true, issuer=="AttestationPolicy"] &&
+ c1:[type=="smmQuery", issuer=="AttestationPolicy"] &&
+ c2:[type=="events", issuer=="AttestationService"] => issue(type="smmLevel", value=JsonToClaimValue(JmesPath(c2.value, c1.value)));
+};
+```
++
+### Attestation policy to validate the identity and validity of a VBS enclave.
+
+```
+version=1.2;
+
+authorizationrules {
+ [type=="vsmReportPresent", value==true] &&
+ [type=="enclaveAuthorId", value=="AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"] &&
+ [type=="enclaveImageId", value=="AQEAAAAAAAAAAAAAAAAAAA"] &&
+ [type=="enclaveSvn", value>=0] => permit();
+};
+
+issuancerules
+{
+};
+```
+
+### Attestation policy to issue all incoming claims produced by the service.
+
+```
+version = 1.2;
+
+configurationrules
+{
+};
+
+authorizationrules
+{
+ => permit();
+};
+
+issuancerules
+{
+ c:[type=="bootEvents", issuer=="AttestationService"] => issue(type="outputBootEvents", value=c.value);
+ c:[type=="events", issuer=="AttestationService"] => issue(type="outputEvents", value=c.value);
+};
+```
+
+### Attestation policy to produce some critical security evaluation claims for Windows.
+
+```
+version=1.2;
+
+authorizationrules {
+ => permit();
+};
+
+issuancerules{
+
+// SecureBoot enabled
+c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']"));
+c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => issue(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'")));
+![type=="secureBootEnabled", issuer=="AttestationPolicy"] => issue(type="secureBootEnabled", value=false);
+
+// Boot debugging
+c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="bootDebuggingEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BOOTDEBUGGING")));
+c:[type=="bootDebuggingEnabledSet", issuer=="AttestationPolicy"] => issue(type="bootDebuggingDisabled", value=ContainsOnlyValue(c.value, false));
+![type=="bootDebuggingDisabled", issuer=="AttestationPolicy"] => issue(type="bootDebuggingDisabled", value=false);
+
+// Virtualization Based Security enabled
+c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` || PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));
+c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="vbsEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_VSM_REQUIRED")));
+c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="vbsEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_MANDATORY_ENFORCEMENT")));
+c:[type=="vbsEnabledSet", issuer=="AttestationPolicy"] => issue(type="vbsEnabled", value=ContainsOnlyValue(c.value, true));
+![type=="vbsEnabled", issuer=="AttestationPolicy"] => issue(type="vbsEnabled", value=false);
+c:[type=="vbsEnabled", issuer=="AttestationPolicy"] => issue(type="vbsEnabled", value=c.value);
+
+// System Guard and SMM value
+c:[type=="events", issuer=="AttestationService"] => add(type="validDrtmStateAuthEvent", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `20` && ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_DRTM_STATE_AUTH.SignatureValid != `null`] | length(@) == `1` && @[0] | @.{EventSeq:EventSeq, SignatureValid:ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_DRTM_STATE_AUTH.SignatureValid}"));
+
+// Check if Signature is valid in extracted state auth events
+c:[type=="validDrtmStateAuthEvent", issuer=="AttestationPolicy"] => issue(type="drtmMleValid", value=JsonToClaimValue(JmesPath(c.value, "SignatureValid")));
+![type=="drtmMleValid", issuer=="AttestationPolicy"] => issue(type="drtmMleValid", value=false);
+
+// Get the sequence number of the DRTM state auth event.
+// The sequence number is used to ensure that the SMM event appears before the last DRTM state auth event.
+[type=="drtmMleValid", value==true, issuer=="AttestationPolicy"] && c:[type=="validDrtmStateAuthEvent", issuer=="AttestationPolicy"] => add(type="validDrtmStateAuthEventSeq", value=JmesPath(c.value, "EventSeq"));
+
+// Create query for SMM event
+// The query is constructed to find the SMM level from the SMM level event that appears exactly once before the valid DRTM state auth event in the event log
+[type=="drtmMleValid", value==true, issuer=="AttestationPolicy"] && c:[type=="validDrtmStateAuthEventSeq", issuer=="AttestationPolicy"] => add(type="smmQuery", value=AppendString(AppendString("Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `20` && EventSeq < `", c.value), "`].ProcessedData.EVENT_DRTM_SMM | length(@) == `1` && @[0] | @.Value"));
+
+// Extract SMM value
+[type=="drtmMleValid", value==true, issuer=="AttestationPolicy"] &&
+c1:[type=="smmQuery", issuer=="AttestationPolicy"] &&
+c2:[type=="events", issuer=="AttestationService"] => issue(type="smmLevel", value=JsonToClaimValue(JmesPath(c2.value, c1.value)));
+
+};
+```
+
+### Attestation policy to validate boot-related firmware and early boot driver signers on Linux
+
+```
+version = 1.2;
+
+configurationrules
+{
+};
+
+authorizationrules
+{
+ [type == "aikValidated", value==true]
+ => permit();
+};
+
+issuancerules {
+ // Retrieve all EFI Boot variables with event = 'EV_EFI_VARIABLE_BOOT'
+ c:[type == "events", issuer=="AttestationService"] => add(type ="efiBootVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_BOOT']"));
+
+ // Retrieve all EFI Driver Config variables with event = 'EV_EFI_VARIABLE_DRIVER_CONFIG'
+ c:[type == "events", issuer=="AttestationService"] => add(type ="efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG']"));
+
+ // Grab all IMA events
+ c:[type=="events", issuer=="AttestationService"] => add(type="imaMeasurementEvents", value=JmesPath(c.value, "Events[?EventTypeString == 'IMA_MEASUREMENT_EVENT']"));
+
+ // Look for "Boot Order" from EFI Boot Data
+ c:[type == "efiBootVariables", issuer=="AttestationPolicy"] => add(type = "bootOrderFound", value = JmesPath(c.value, "[?ProcessedData.UnicodeName == 'BootOrder'] | length(@) == `1` && @[0].PcrIndex == `1` && @[0].ProcessedData.VariableData"));
+ c:[type=="bootOrderFound", issuer=="AttestationPolicy"] => issue(type="bootOrder", value=JsonToClaimValue(c.value));
+ ![type=="bootOrderFound", issuer=="AttestationPolicy"] => issue(type="bootOrder", value=0);
+
+ // Look for "Secure Boot" from EFI Driver Configuration Data
+ c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => issue(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].PcrIndex == `7` && @[0].ProcessedData.VariableData == 'AQ'")));
+ ![type=="secureBootEnabled", issuer=="AttestationPolicy"] => issue(type="secureBootEnabled", value=false);
+
+ // Look for "Platform Key" from EFI Boot Data
+ c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => add(type = "platformKeyFound", value = JmesPath(c.value, "[?ProcessedData.UnicodeName == 'PK'] | length(@) == `1` && @[0].PcrIndex == `7` && @[0].ProcessedData.VariableData"));
+ c:[type=="platformKeyFound", issuer=="AttestationPolicy"] => issue(type="platformKey", value=JsonToClaimValue(c.value));
+ ![type=="platformKeyFound", issuer=="AttestationPolicy"] => issue(type="platformKey", value=0);
+
+ // Look for "Key Exchange Key" from EFI Driver Configuration Data
+ c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => add(type = "keyExchangeKeyFound", value = JmesPath(c.value, "[?ProcessedData.UnicodeName == 'KEK'] | length(@) == `1` && @[0].PcrIndex == `7` && @[0].ProcessedData.VariableData"));
+ c:[type=="keyExchangeKeyFound", issuer=="AttestationPolicy"] => issue(type="keyExchangeKey", value=JsonToClaimValue(c.value));
+ ![type=="keyExchangeKeyFound", issuer=="AttestationPolicy"] => issue(type="keyExchangeKey", value=0);
+
+ // Look for "Key Database" from EFI Driver Configuration Data
+ c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => add(type = "keyDatabaseFound", value = JmesPath(c.value, "[?ProcessedData.UnicodeName == 'db'] | length(@) == `1` && @[0].PcrIndex == `7` && @[0].ProcessedData.VariableData"));
+ c:[type=="keyDatabaseFound", issuer=="AttestationPolicy"] => issue(type="keyDatabase", value=JsonToClaimValue(c.value));
+ ![type=="keyDatabaseFound", issuer=="AttestationPolicy"] => issue(type="keyDatabase", value=0);
+
+ // Look for "Forbidden Signatures" from EFI Driver Configuration Data
+ c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => add(type = "forbiddenSignaturesFound", value = JmesPath(c.value, "[?ProcessedData.UnicodeName == 'dbx'] | length(@) == `1` && @[0].PcrIndex == `7` && @[0].ProcessedData.VariableData"));
+ c:[type=="forbiddenSignaturesFound", issuer=="AttestationPolicy"] => issue(type="forbiddenSignatures", value=JsonToClaimValue(c.value));
+ ![type=="forbiddenSignaturesFound", issuer=="AttestationPolicy"] => issue(type="forbiddenSignatures", value=0);
+
+ // Look for "Kernel Version" in IMA Measurement events
+ c:[type=="imaMeasurementEvents", issuer=="AttestationPolicy"] => add(type="kernelVersionsFound", value=JmesPath(c.value, "[].ProcessedData.KernelVersion"));
+ c:[type=="kernelVersionsFound", issuer=="AttestationPolicy"] => issue(type="kernelVersions", value=JsonToClaimValue(c.value));
+ ![type=="kernelVersionsFound", issuer=="AttestationPolicy"] => issue(type="kernelVersions", value=0);
+
+ // Look for "Built-In Trusted Keys" in IMA Measurement events
+ c:[type=="imaMeasurementEvents", issuer=="AttestationPolicy"] => add(type="builtintrustedkeysFound", value=JmesPath(c.value, "[? ProcessedData.Keyring == '.builtin_trusted_keys'].ProcessedData.CertificateSubject"));
+ c:[type=="builtintrustedkeysFound", issuer=="AttestationPolicy"] => issue(type="builtintrustedkeys", value=JsonToClaimValue(c.value));
+ ![type=="builtintrustedkeysFound", issuer=="AttestationPolicy"] => issue(type="builtintrustedkeys", value=0);
+};
+
+```
+### Attestation policy to issue the list of drivers loaded during boot.
+
+```
+version = 1.2;
+
+configurationrules
+{
+};
+
+authorizationrules
+{
+ => permit();
+};
+
+issuancerules {
+
+c:[type=="events", issuer=="AttestationService"] => issue(type="alldriverloads", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' ].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_LOADEDMODULE_AGGREGATION[].EVENT_FILEPATH"));
+
+c:[type=="events", issuer=="AttestationService"] => issue(type="DriverLoadPolicy", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `13`)].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_DRIVER_LOAD_POLICY.String"));
+
+};
+
+```
+
+### Attestation policy for key attestation, validating keys and their properties.
+
+```
+version=1.2;
+
+authorizationrules
+{
+ // Key Attest Policy
+ // -- Validating key types
+ c:[type=="x-ms-tpm-request-key", issuer=="AttestationService"] => add(type="requestKeyType", value=JsonToClaimValue(JmesPath(c.value, "jwk.kty")));
+ c:[type=="requestKeyType", issuer=="AttestationPolicy", value=="RSA"] => issue(type="requestKeyType", value="RSA");
+
+ // -- Validating tpm_certify attributes
+ c:[type=="x-ms-tpm-request-key", issuer=="AttestationService"] => add(type="requestKeyCertify", value=JmesPath(c.value, "info.tpm_certify"));
+ c:[type=="requestKeyCertify", issuer=="AttestationPolicy"] => add(type="requestKeyCertifyObjAttr", value=JsonToClaimValue(JmesPath(c.value, "obj_attr")));
+ c:[type=="requestKeyCertifyObjAttr", issuer=="AttestationPolicy", value==50] => issue(type="requestKeyCertifyObjAttrVerified", value=true);
+
+ c:[type=="requestKeyCertifyObjAttrVerified", issuer=="AttestationPolicy" , value==true] => permit();
+
+};
+
+issuancerules
+{
+
+};
+```
attestation Virtualization Based Security Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/virtualization-based-security-protocol.md
Title: Virtualization-based Security (VBS) protocol for Azure Attestation
+ Title: Virtualization-based security (VBS) protocol for Azure Attestation
description: VBS attestation protocol
VBS enclaves require a TPM to provide the measurement to validate the security f
## Protocol messages
+The protocol has two message exchanges:
+* Init message
+* Request message
+ ### Init message
+The init message establishes the context for the request message.
#### Direction
Azure Attestation -> Client
**service_context** (BASE64URL(OCTETS)): Opaque context created by the service.
-### Request message
+## Request message
+The request message carries the payload containing the data to be attested by the attestation service.
+
+Note: Support for IMA measurement logs and keys has been added to the request message; see the Request message v2 section.
+
+## Request message v1
#### Direction
BASE64URL(JWS Payload) || '.' ||
BASE64URL(JWS Signature)
-##### JWS Protected Header
+##### JWS protected header
``` {
BASE64URL(JWS Signature)
} ```
-##### JWS Payload
+##### JWS payload
JWS payload can be of type basic or VBS. Basic is used when attestation evidence does not include VBS data.
Azure Attestation -> Client
**report** (JWT): The attestation report in JSON Web Token (JWT) format (RFC 7519).
+## Request message v2
++
+```
+{
+ "request": "<JWS>"
+}
+```
+
+**request** (JWS): The request consists of a JSON Web Signature (JWS) structure. The JWS Protected Header and JWS Payload are shown below. As in any JWS structure, the final value consists of:
+BASE64URL(UTF8(JWS Protected Header)) || '.' ||
+BASE64URL(JWS Payload) || '.' ||
+BASE64URL(JWS Signature)
+
+##### JWS protected header
+```
+{
+ "alg": "PS256",
+ "typ": "attReqV2"
+ // no "kid" parameter as the key specified by request_key MUST sign this JWS to prove possession.
+}
+```
+
+##### JWS payload
+
+JWS payload can be of type basic or VBS. Basic is used when the attestation evidence doesn't include VBS data.
+
+Basic example:
+
+```
+{
+ "att_type": "basic",
+ "att_data": {
+ "rp_id": "<URL>",
+ "rp_data": "<BASE64URL(RPCUSTOMDATA)>",
+ "challenge": "<BASE64URL(CHALLENGE)>",
+ "tpm_att_data": {
+ "current_attestation": {
+ "logs": [
+ {
+ "type": "TCG",
+ "log": "<BASE64URL(CURRENT_LOG1)>"
+ },
+ {
+ "type": "TCG",
+ "log": "<BASE64URL(CURRENT_LOG2)>"
+ },
+ {
+ "type": "TCG",
+ "log": "<BASE64URL(CURRENT_LOG3)>"
+ }
+ ],
+ "aik_cert": "<BASE64URL(AIKCERTIFICATE)>",
+ // aik_pub is represented as a JSON Web Key (JWK) object (RFC 7517).
+ "aik_pub": {
+ "kty": "RSA",
+ "n": "<Base64urlUInt(MODULUS)>",
+ "e": "<Base64urlUInt(EXPONENT)>"
+ },
+ "pcrs": [
+ {
+ "algorithm": 4, // TPM_ALG_SHA1
+ "values": [
+ {
+ "index": 0,
+ "digest": "<BASE64URL(DIGEST)>"
+ },
+ {
+ "index": 5,
+ "digest": "<BASE64URL(DIGEST)>"
+ }
+ ]
+ },
+ {
+ "algorithm": 11, // TPM_ALG_SHA256
+ "values": [
+ {
+ "index": 2,
+ "digest": "<BASE64URL(DIGEST)>"
+ },
+ {
+ "index": 1,
+ "digest": "<BASE64URL(DIGEST)>"
+ }
+ ]
+ }
+ ],
+ "quote": "<BASE64URL(TPMS_ATTEST)>",
+ "signature": "<BASE64URL(TPMT_SIGNATURE)>"
+ },
+ "boot_attestation": {
+ "logs": [
+ {
+ "type": "TCG",
+ "log": "<BASE64URL(BOOT_LOG1)>"
+ },
+ {
+ "type": "TCG",
+ "log": "<BASE64URL(BOOT_LOG2)>"
+ }
+ ],
+ "aik_cert": "<BASE64URL(AIKCERTIFICATE)>",
+ // aik_pub is represented as a JSON Web Key (JWK) object (RFC 7517).
+ "aik_pub": {
+ "kty": "RSA",
+ "n": "<Base64urlUInt(MODULUS)>",
+ "e": "<Base64urlUInt(EXPONENT)>"
+ },
+ "pcrs": [
+ {
+ "algorithm": 4, // TPM_ALG_SHA1
+ "values": [
+ {
+ "index": 0,
+ "digest": "<BASE64URL(DIGEST)>"
+ },
+ {
+ "index": 5,
+ "digest": "<BASE64URL(DIGEST)>"
+ }
+ ]
+ },
+ {
+ "algorithm": 11, // TPM_ALG_SHA256
+ "values": [
+ {
+ "index": 2,
+ "digest": "<BASE64URL(DIGEST)>"
+ },
+ {
+ "index": 1,
+ "digest": "<BASE64URL(DIGEST)>"
+ }
+ ]
+ }
+ ],
+ "quote": "<BASE64URL(TPMS_ATTEST)>",
+ "signature": "<BASE64URL(TPMT_SIGNATURE)>"
+ }
+ },
+ "request_key": {
+ "jwk": {
+ "kty": "RSA",
+ "n": "<Base64urlUInt(MODULUS)>",
+ "e": "<Base64urlUInt(EXPONENT)>"
+ },
+ "info": {
+ "tpm_quote": {
+ "hash_alg": "sha-256"
+ }
+ }
+ },
+ "other_keys": [
+ {
+ "jwk": {
+ "kty": "RSA",
+ "n": "<Base64urlUInt(MODULUS)>",
+ "e": "<Base64urlUInt(EXPONENT)>"
+ },
+ "info": {
+ "tpm_certify": {
+ "public": "<BASE64URL(TPMT_PUBLIC)>",
+ "certification": "<BASE64URL(TPMS_ATTEST)>",
+ "signature": "<BASE64URL(TPMT_SIGNATURE)>"
+ }
+ }
+ },
+ {
+ "jwk": {
+ "kty": "RSA",
+ "n": "<Base64urlUInt(MODULUS)>",
+ "e": "<Base64urlUInt(EXPONENT)>"
+ }
+ }
+ ],
+ "custom_claims": [
+ {
+ "name": "<name>",
+ "value": "<value>",
+ "value_type": "<value_type>"
+ },
+ {
+ "name": "<name>",
+ "value": "<value>",
+ "value_type": "<value_type>"
+ }
+ ],
+ "service_context": "<BASE64URL(SERVICECONTEXT)>"
+ }
+}
+```
+
+TPM + VBS enclave example:
+```
+{
+ "att_type": "vbs",
+ "att_data": {
+ "report_signed": {
+ "rp_id": "<URL>",
+ "rp_data": "<BASE64URL(RPCUSTOMDATA)>",
+ "challenge": "<BASE64URL(CHALLENGE)>",
+ "tpm_att_data": {
+ "current_attestation": {
+ "logs": [
+ {
+ "type": "TCG",
+ "log": "<BASE64URL(CURRENT_LOG1)>"
+ },
+ {
+ "type": "TCG",
+ "log": "<BASE64URL(CURRENT_LOG2)>"
+ },
+ {
+ "type": "TCG",
+ "log": "<BASE64URL(CURRENT_LOG3)>"
+ }
+ ],
+ "aik_cert": "<BASE64URL(AIKCERTIFICATE)>",
+ // aik_pub is represented as a JSON Web Key (JWK) object (RFC 7517).
+ "aik_pub": {
+ "kty": "RSA",
+ "n": "<Base64urlUInt(MODULUS)>",
+ "e": "<Base64urlUInt(EXPONENT)>"
+ },
+ "pcrs": [
+ {
+ "algorithm": 4, // TPM_ALG_SHA1
+ "values": [
+ {
+ "index": 0,
+ "digest": "<BASE64URL(DIGEST)>"
+ },
+ {
+ "index": 5,
+ "digest": "<BASE64URL(DIGEST)>"
+ }
+ ]
+ },
+ {
+ "algorithm": 11, // TPM_ALG_SHA256
+ "values": [
+ {
+ "index": 2,
+ "digest": "<BASE64URL(DIGEST)>"
+ },
+ {
+ "index": 1,
+ "digest": "<BASE64URL(DIGEST)>"
+ }
+ ]
+ }
+ ],
+ "quote": "<BASE64URL(TPMS_ATTEST)>",
+ "signature": "<BASE64URL(TPMT_SIGNATURE)>"
+ },
+ "boot_attestation": {
+ "logs": [
+ {
+ "type": "TCG",
+ "log": "<BASE64URL(BOOT_LOG1)>"
+ },
+ {
+ "type": "TCG",
+ "log": "<BASE64URL(BOOT_LOG2)>"
+ }
+ ],
+ "aik_cert": "<BASE64URL(AIKCERTIFICATE)>",
+ // aik_pub is represented as a JSON Web Key (JWK) object (RFC 7517).
+ "aik_pub": {
+ "kty": "RSA",
+ "n": "<Base64urlUInt(MODULUS)>",
+ "e": "<Base64urlUInt(EXPONENT)>"
+ },
+ "pcrs": [
+ {
+ "algorithm": 4, // TPM_ALG_SHA1
+ "values": [
+ {
+ "index": 0,
+ "digest": "<BASE64URL(DIGEST)>"
+ },
+ {
+ "index": 5,
+ "digest": "<BASE64URL(DIGEST)>"
+ }
+ ]
+ },
+ {
+ "algorithm": 11, // TPM_ALG_SHA256
+ "values": [
+ {
+ "index": 2,
+ "digest": "<BASE64URL(DIGEST)>"
+ },
+ {
+ "index": 1,
+ "digest": "<BASE64URL(DIGEST)>"
+ }
+ ]
+ }
+ ],
+ "quote": "<BASE64URL(TPMS_ATTEST)>",
+ "signature": "<BASE64URL(TPMT_SIGNATURE)>"
+ }
+ },
+ "request_key": {
+ "jwk": {
+ "kty": "RSA",
+ "n": "<Base64urlUInt(MODULUS)>",
+ "e": "<Base64urlUInt(EXPONENT)>"
+ },
+ "info": {
+ "tpm_quote": {
+ "hash_alg": "sha-256"
+ }
+ }
+ },
+ "other_keys": [
+ {
+ "jwk": {
+ "kty": "RSA",
+ "n": "<Base64urlUInt(MODULUS)>",
+ "e": "<Base64urlUInt(EXPONENT)>"
+ },
+ "info": {
+ "tpm_certify": {
+ "public": "<BASE64URL(TPMT_PUBLIC)>",
+ "certification": "<BASE64URL(TPMS_ATTEST)>",
+ "signature": "<BASE64URL(TPMT_SIGNATURE)>"
+ }
+ }
+ },
+ {
+ "jwk": {
+ "kty": "RSA",
+ "n": "<Base64urlUInt(MODULUS)>",
+ "e": "<Base64urlUInt(EXPONENT)>"
+ }
+ }
+ ],
+ "custom_claims": [
+ {
+ "name": "<name>",
+ "value": "<value>",
+ "value_type": "<value_type>"
+ },
+ {
+ "name": "<name>",
+ "value": "<value>",
+ "value_type": "<value_type>"
+ }
+ ],
+ "service_context": "<BASE64URL(SERVICECONTEXT)>"
+ },
+ "vsm_report": {
+ "enclave": {
+ "report": "<BASE64URL(REPORT)>"
+ }
+ }
+ }
+}
+```
+
+**rp_id** (StringOrURI): Relying party identifier. Used by the service in the computation of the machine ID claim.
+
+**rp_data** (BASE64URL(OCTETS)): Opaque data passed by the relying party. This is normally used by the relying party as a nonce to guarantee freshness of the report.
+
+**challenge** (BASE64URL(OCTETS)): Random value issued by the service.
+
+- ***current_attestation*** (Object): Contains logs and TPM quote for the current state of the system (either boot or resume). The nonce received from the service must be passed to the TPM2_Quote command in the 'qualifyingData' parameter.
+
+- ***boot_attestation*** (Object): This is optional and contains logs and the TPM quote saved before the system hibernated and resumed. boot_attestation info must be associated with the same cold boot cycle (i.e. the system was only hibernated and resumed between them).
+
+- ***logs*** (Array(Object)): Array of logs. Each element of the array contains a log and the array must be in the order used for measurements.
+++++
+- ***aik_cert*** (BASE64URL(OCTETS)): The X.509 certificate representing the AIK.
+
+- ***aik_pub*** (JWK): The public part of the AIK represented as a JSON Web Key (JWK) object (RFC 7517).
+
+- ***pcrs*** (Array(Object)): Contains the set quoted. Each element of the array represents a PCR bank and the array must be in the order used to create the quote. A PCR bank is defined by its algorithm and its values (only the values quoted should be in the list).
+++++++++++++
+**vsm_report** (VSM Report Object): The VSM attestation report. See the VBS report object section.
+
+**request_key** (Key object): Key used to sign the request. If a TPM is present (request contains TPM quote), request_key must either be bound to the TPM via quote or be resident in the TPM (see KEY OBJECT).
+
+**other_keys** (Array(Key object)): Array of keys to be sent to the service. Maximum of 2 keys.
+
+**custom_claims** (Array(Object)): Array of custom enclave claims sent to the service that can be evaluated by the policy.
+
+- ***name*** (String): Name of the claim. This name will be appended to a URL determined by the Attestation Service (to avoid conflicts), and the concatenated string becomes the type of the claim that can be used in the policy.
+
+- ***value*** (String): Value of the claim.
+
+- ***value_type*** (String): Data type of the claim's value.
+
+**service_context** (BASE64URL(OCTETS)): Opaque, encrypted context created by the service which includes, among others, the challenge and an expiration time for that challenge.
+
+## Key object
+
+**jwk** (Object): The public part of the key represented as a JSON Web Key (JWK) object (RFC 7517).
+
+**info** (Object): Extra information about the key.
+
+• No extra information (the info object can be empty or missing from the request).
+
+• Key bound to the TPM via quote:
+- ***tpm_quote*** (Object): Data for the TPM quote binding method.
+- ***hash_alg*** (String): The algorithm used to create the hash passed to the TPM2_Quote command in the 'qualifyingData' parameter. The hash is computed by HASH[UTF8(jwk) || 0x00 || <OCTETS(service challenge)>]. Note: UTF8(jwk) must be the exact string that will be sent on the wire as the service will compute the hash using the exact string received in the request without modifications.
+
+>> Note: This binding method cannot be used for keys in the other_keys array.
+
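+As a minimal sketch only (not part of the protocol specification), the following Python snippet shows how the hash described above for `hash_alg` of `sha-256` might be computed. The JWK string and challenge values are placeholders; the real JWK text must be the exact string sent on the wire, and the challenge comes from the init message.
+
+```python
+import base64
+import hashlib
+
+# Placeholder values for illustration only; not produced by the service.
+jwk_on_the_wire = '{"kty":"RSA","n":"<Base64urlUInt(MODULUS)>","e":"AQAB"}'
+challenge_b64url = "c2VydmljZS1jaGFsbGVuZ2U"  # BASE64URL(CHALLENGE) from the init message
+
+challenge = base64.urlsafe_b64decode(challenge_b64url + "=" * (-len(challenge_b64url) % 4))
+
+# hash_alg "sha-256": HASH[ UTF8(jwk) || 0x00 || OCTETS(service challenge) ]
+qualifying_data = hashlib.sha256(jwk_on_the_wire.encode("utf-8") + b"\x00" + challenge).digest()
+```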
+• Key certified to be resident in the TPM:
+
+- ***tpm_certify*** (Object): Data for the TPM certification binding method.
+- ***public*** (BASE64URL(OCTETS)): TPMT_PUBLIC structure representing the public area of the key in the TPM.
+
+- ***certification*** (BASE64URL(OCTETS)): TPMS_ATTEST returned by the TPM2_Certify command. The challenge received from the service must be passed to the TPM2_Certify command in the 'qualifyingData' parameter. The AIK provided in the request must be used to certify the key.
+
+- ***signature*** (BASE64URL(OCTETS)): TPMT_SIGNATURE returned by the TPM2_Certify command. The challenge received from the service must be passed to the TPM2_Certify command in the 'qualifyingData' parameter. The AIK provided in the request must be used to certify the key.
+
+>> Note: When this binding method is used for the request_key, the 'qualifyingData' parameter value passed to the TPM2_Quote command is simply the challenge received from the service.
+
+Examples:
+
+Key not bound to the TPM:
+```
+{
+ "jwk": {
+ "kty": "RSA",
+ "n": "<Base64urlUInt(MODULUS)>",
+ "e": "<Base64urlUInt(EXPONENT)>"
+ }
+}
+```
+
+Key bound to the TPM via quote (either resident in a VBS enclave or not):
+```
+{
+ "jwk": {
+ "kty": "RSA",
+ "n": "<Base64urlUInt(MODULUS)>",
+ "e": "<Base64urlUInt(EXPONENT)>"
+ },
+ "info": {
+ "tpm_quote": {
+ "hash_alg": "sha-256"
+ }
+ }
+}
+```
+
+Key certified to be resident in the TPM:
+```
+{
+ "jwk": {
+ "kty": "RSA",
+ "n": "<Base64urlUInt(MODULUS)>",
+ "e": "<Base64urlUInt(EXPONENT)>"
+ },
+ "info": {
+ "tpm_certify": {
+ "public": "<BASE64URL(TPMT_PUBLIC)>",
+ "certification": "<BASE64URL(TPMS_ATTEST)>",
+ "signature": "<BASE64URL(TPMT_SIGNATURE)>"
+ }
+ }
+}
+```
+
+## Policy key object
+
+The policy key object is the version of the key object used as input claims in the policy. It is processed by the service in order to make it more readable and easier to evaluate by policy rules.
+
+• Key not bound to the TPM:
+Same as the respective key object.
+Example:
+```
+{
+ "jwk": {
+ "kty": "RSA",
+ "n": "<Base64urlUInt(MODULUS)>",
+ "e": "<Base64urlUInt(EXPONENT)>"
+ }
+}
+```
+• Key bound to the TPM via quote (either resident in a VBS enclave or not):
+Same as the respective key object.
+Example:
+```
+{
+ "jwk": {
+ "kty": "RSA",
+ "n": "<Base64urlUInt(MODULUS)>",
+ "e": "<Base64urlUInt(EXPONENT)>"
+ },
+ "info": {
+ "tpm_quote": {
+ "hash_alg": "sha-256"
+ }
+ }
+}
+```
+
+• Key certified to be resident in the TPM:
+
+***jwk*** (Object): Same as the respective key object.
+***info.tpm_certify*** (Object):
+- ***name_alg*** (Integer): UINT16 value representing a hash algorithm defined by the TPM_ALG_ID constants.
+- ***obj_attr*** (Integer): UINT32 value representing the attributes of the key object defined by TPMA_OBJECT.
+- ***auth_policy*** (BASE64URL(OCTETS)): Optional policy for using this key object.
+
+Example:
+```
+{
+ "jwk": {
+ "kty": "RSA",
+ "n": "<Base64urlUInt(MODULUS)>",
+ "e": "<Base64urlUInt(EXPONENT)>"
+ },
+ "info": {
+ "tpm_certify": {
+ "name_alg": 11, // 0xB (TPM_ALG_SHA256)
+ "obj_attr": 50, // 0x32 (fixedTPM | fixedParent | sensitiveDataOrigin)
+ "auth_policy": "<BASE64URL(AUTH_POLICY)>"
+ }
+ }
+}
+```
+
+## VBS report object
+
+### Enclave attestation
+***enclave*** (Object): Data for VSM enclave attestation.
+- ***report*** (BASE64URL(OCTETS)): The VSM enclave attestation report as returned by function EnclaveGetAttestationReport. The EnclaveData parameter must be the SHA-512 hash of the value of report_signed (including the opening and closing braces). The hash function input is UTF8(report_signed).
+
+Examples:
+
+Enclave attestation:
+```
+{
+ "enclave": {
+ "report": "<BASE64URL(REPORT)>"
+ }
+}
+```
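+As a minimal sketch, assuming a Python client and that `report_signed_text` holds the exact UTF-8 JSON text of the `report_signed` object, the EnclaveData value described above could be computed as follows:
+
+```python
+import hashlib
+
+def enclave_data_for(report_signed_text: str) -> bytes:
+    # report_signed_text must be the exact JSON text of report_signed,
+    # including the opening and closing braces, as sent in the request.
+    return hashlib.sha512(report_signed_text.encode("utf-8")).digest()
+```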
+
+## Report message
+
+#### Direction
+Attestation Service -> Client
+
+#### Payload
+```
+{
+ "report": "<JWT>"
+}
+```
+
+***report*** (JWT): The attestation report in JSON Web Token (JWT) format (RFC 7519).
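+As an illustration only (not part of the protocol), a client could inspect the claims of the returned report with a sketch like the following; the signature should still be verified against the attestation provider's signing certificates before any claim is trusted.
+
+```python
+import base64
+import json
+
+def decode_report_claims(report_jwt: str) -> dict:
+    """Decode the report's claims without verifying the JWT signature."""
+    def b64url_decode(segment: str) -> bytes:
+        return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))
+
+    _header, payload, _signature = report_jwt.split(".")
+    return json.loads(b64url_decode(payload))
+```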
++ ## Next steps - [Azure Attestation workflow](workflow.md)
automation Automation Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-availability-zones.md
In the event when a zone is down, there's no action required by you to recover f
## Supported regions with availability zones See [Regions and Availability Zones in Azure](../availability-zones/az-overview.md) for the Azure regions that have availability zones.
-Automation accounts currently support the following regions in preview:
+Automation accounts currently support the following regions:
- China North 3 - Qatar Central
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
This page is updated monthly, so revisit it regularly. If you're looking for ite
## October 2022
+### Public preview of PowerShell 7.2 and Python 3.10
+
+Azure Automation now supports runbooks in the latest runtime versions, PowerShell 7.2 and Python 3.10, in public preview. This enables creation and execution of runbooks for orchestration of management tasks. These new runtimes are currently supported only for cloud jobs in five regions: West Central US, East US, South Africa North, North Europe, and Australia Southeast. [Learn more](automation-runbook-types.md).
+ ### Guidance for Disaster Recovery of Azure Automation account
-Azure Automation now supports you to build your own disaster recovery strategy to handle a region-wide or zone-wide failure. [Learn more](https://learn.microsoft.com/azure/automation/automation-disaster-recovery).
+Build your own disaster recovery strategy to handle a region-wide or zone-wide failure. [Learn more](https://learn.microsoft.com/azure/automation/automation-disaster-recovery).
## September 2022
azure-arc Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/custom-locations.md
Title: "Create and manage custom locations on Azure Arc-enabled Kubernetes" Previously updated : 10/12/2022 Last updated : 11/01/2022 description: "Use custom locations to deploy Azure PaaS services on Azure Arc-enabled Kubernetes clusters"
Optional parameters:
To delete a custom location, use the following command: ```azurecli
-az customlocation delete -n <customLocationName> -g <resourceGroupName> --namespace <name of namespace> --host-resource-id <connectedClusterId> --cluster-extension-ids <extensionIds>
+az customlocation delete -n <customLocationName> -g <resourceGroupName>
```
+Required parameters:
+
+| Parameter name | Description |
+|--|--|
+| `--name, -n` | Name of the custom location |
+| `--resource-group, -g` | Resource group of the custom location |
+ ## Troubleshooting If custom location creation fails with the error 'Unknown proxy error occurred', it may be due to network policies configured to disallow pod-to-pod internal communication.
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
To manage GitOps through the Azure CLI or the Azure portal, you need the followi
> For new AKS clusters created with "az aks create", the cluster will be MSI-based by default. For already created SPN-based clusters that need to be converted to MSI run "az aks update -g $RESOURCE_GROUP -n $CLUSTER_NAME --enable-managed-identity". For more information, refer to [managed identity docs](../../aks/use-managed-identity.md). * Read and write permissions on the `Microsoft.ContainerService/managedClusters` resource type.
-* Registration of your subscription with the `AKS-ExtensionManager` feature flag. Use the following command:
-
- ```console
- az feature register --namespace Microsoft.ContainerService --name AKS-ExtensionManager
- ```
### Common to both cluster types
azure-arc Ssh Arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-overview.md
allowing existing management tools to have a greater impact on Azure Arc-enabled
SSH access to Arc-enabled servers provides the following key benefits: - No public IP address or open SSH ports required - Access to Windows and Linux machines
+ - Ability to log in as a local user or an [Azure user (Linux only)](../../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md)
- Support for other OpenSSH based tooling with config file support ## Prerequisites
SSH access to Arc-enabled servers is currently supported in the following region
- Ubuntu Server: Ubuntu Server 16.04 to Ubuntu Server 20.04 ## Getting started
-### Register the HybridConnectivity resource provider
+
+### Install local command line tool
+This functionality is currently packaged in an Azure CLI extension and an Azure PowerShell module.
+#### [Install Azure CLI extension](#tab/azure-cli)
+
+```az extension add --name ssh```
+ > [!NOTE]
-> This is a one-time operation that needs to be performed on each subscription.
+> The Azure CLI extension version must be greater than 1.1.0.
-Check if the HybridConnectivity resource provider (RP) has been registered:
+#### [Install Azure PowerShell module](#tab/azure-powershell)
-```az provider show -n Microsoft.HybridConnectivity```
+```Install-Module -Name AzPreview -Scope CurrentUser -Repository PSGallery -Force```
-If the RP has not been registered, run the following:
+### Enable functionality on your Arc-enabled server
+In order to use the SSH connect feature, you must enable connections on the hybrid agent.
-```az provider register -n Microsoft.HybridConnectivity```
+> [!NOTE]
+> The following actions must be completed in an elevated terminal session.
-This operation can take 2-5 minutes to complete. Before moving on, check that the RP has been registered.
+View your current incoming connections:
-### Install az CLI extension
-This functionality is currently package in an az CLI extension.
-In order to install this extension, run:
+```azcmagent config list```
-```az extension add --name ssh```
+If you have existing ports, you'll need to include them in the following command.
-If you already have the extension installed, it can be updated by running:
+To add access to SSH connections, run the following:
-```az extension update --name ssh```
+```azcmagent config set incomingconnections.ports 22<,other open ports,...>```
+
+If you're using a non-default port for your SSH connection, replace port 22 with your desired port in the previous command.
> [!NOTE]
-> The Azure CLI extension version must be greater than 1.1.0.
+> Most users won't need to run the following steps.
+
+### Register the HybridConnectivity resource provider
+> [!NOTE]
+> This is a one-time operation that needs to be performed on each subscription.
+
+Check if the HybridConnectivity resource provider (RP) has been registered:
+
+```az provider show -n Microsoft.HybridConnectivity```
+
+If the RP hasn't been registered, run the following:
+
+```az provider register -n Microsoft.HybridConnectivity```
+
+This operation can take 2-5 minutes to complete. Before moving on, check that the RP has been registered.
### Create default connectivity endpoint > [!NOTE]
Validate endpoint creation:
az rest --method get --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2021-10-06-preview ```
-### Enable functionality on your Arc-enabled server
-In order to use the SSH connect feature, you must enable connections on the hybrid agent.
-
-> [!NOTE]
-> The following actions must be completed in an elevated terminal session.
-
-View your current incoming connections:
-
-```azcmagent config list```
-
-If you have existing ports, you will need to include them in the following command.
-
-To add access to SSH connections, run the following:
-
-```azcmagent config set incomingconnections.ports 22<,other open ports,...>```
-
-> [!NOTE]
-> If you are using a non-default port for your SSH connection, replace port 22 with your desired port in the previous command.
- ## Examples
-To view examples of using the ```az ssh arc``` command, view the az CLI documentation page for [az ssh](/cli/azure/ssh).
+To view examples, view the Az CLI documentation page for [az ssh](/cli/azure/ssh) or the Azure PowerShell documentation page for [Az.Ssh](/powershell/module/az.ssh).
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
The [WEBSITE_CONTENTAZUREFILECONNECTIONSTRING](#website_contentazurefileconnecti
If validation is skipped and either the connection string or content share are not valid, the app will be unable to start properly and will only serve HTTP 500 errors.
+## WEBSITE\_SLOT\_NAME
+
+Read-only. Name of the current deployment slot. The name of the production slot is `Production`.
+
+|Key|Sample value|
+|||
+|WEBSITE_SLOT_NAME|`Production`|
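+As a minimal sketch, assuming a Python function app, the setting can be read from the environment, since app settings surface as environment variables:
+
+```python
+import os
+
+# App settings, including WEBSITE_SLOT_NAME, are exposed as environment variables.
+slot_name = os.environ.get("WEBSITE_SLOT_NAME", "Production")
+print(f"Running in deployment slot: {slot_name}")
+```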
+ ## WEBSITE\_TIME\_ZONE Allows you to set the timezone for your function app.
azure-maps How To Dataset Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md
+
+ Title: How to create a dataset using a GeoJson package
+description: Learn how to create a dataset using a GeoJSON package.
++ Last updated : 10/31/2021+++++
+# Create a dataset using a GeoJson package (Preview)
+
+Azure Maps Creator enables users to import their indoor map data in GeoJSON format with [Facility Ontology 2.0][Facility Ontology], which can then be used to create a [dataset][dataset-concept].
+
+> [!NOTE]
+> This article explains how to create a dataset from a GeoJSON package. For information on additional steps required to complete an indoor map, see [Next steps](#next-steps).
+
+## Prerequisites
+
+- Basic understanding of [Creator for indoor maps](creator-indoor-maps.md).
+- Basic understanding of [Facility Ontology 2.0][Facility Ontology].
+- [Azure Maps account][Azure Maps account].
+- [Azure Maps Creator resource][Creator resource].
+- [Subscription key][Subscription key].
+- Zip package containing all required GeoJSON files. If you don't have GeoJSON
+ files, you can download the [Contoso building sample][Contoso building sample].
+
+>[!IMPORTANT]
+>
+> - This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services](how-to-manage-creator.md#access-to-creator-services).
+> - In the URL examples in this article, you will need to replace `{Your-Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+
+## Create dataset using the GeoJSON package
+
+For more information on the GeoJSON package, see the [Geojson zip package requirements](#geojson-zip-package-requirements) section.
+
+### Upload the GeoJSON package
+
+Use the [Data Upload API](/rest/api/maps/data-v2/upload) to upload the GeoJSON package to your Azure Maps Creator account.
+
+The Data Upload API is a long-running transaction that implements the pattern defined in [Creator Long-Running Operation API V2](creator-long-running-operation-v2.md).
+
+To upload the GeoJSON package:
+
+1. Execute the following HTTP POST request that uses the [Data Upload API](/rest/api/maps/data-v2/upload):
+
+ ```http
+ https://us.atlas.microsoft.com/mapData?api-version=2.0&dataFormat=zip&subscription-key={Your-Azure-Maps-Primary-Subscription-key}
+ ```
+
+ 1. Set `Content-Type` in the **Header** to `application/zip`.
+
+1. Copy the value of the `Operation-Location` key in the response header. The `Operation-Location` key is also known as the `status URL` and is required to check the status of the upload, which is explained in the next section.
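+The following is a minimal sketch of the upload request using the Python `requests` library; the file name and subscription key are placeholders, not values from your account.
+
+```python
+import requests
+
+subscription_key = "<Your-Azure-Maps-Primary-Subscription-key>"   # placeholder
+url = "https://us.atlas.microsoft.com/mapData"
+params = {"api-version": "2.0", "dataFormat": "zip", "subscription-key": subscription_key}
+
+with open("contoso_building.zip", "rb") as f:   # placeholder path to your GeoJSON zip package
+    response = requests.post(url, params=params,
+                             headers={"Content-Type": "application/zip"}, data=f)
+
+response.raise_for_status()
+status_url = response.headers["Operation-Location"]   # status URL used in the next section
+print(status_url)
+```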
+
+### Check the GeoJSON package upload status
+
+To check the status of the GeoJSON package and retrieve its unique identifier (`udid`):
+
+1. Execute the following HTTP GET request that uses the status URL you copied as the last step in the previous section of this article. The request should look like the following URL:
+
+```http
+https://us.atlas.microsoft.com/mapData/operations/{operationId}?api-version=2.0&subscription-key={Your-Azure-Maps-Primary-Subscription-key}
+```
+
+1. Copy the value of the `Resource-Location` key in the response header, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the GeoJSON package resource.
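+A minimal polling sketch is shown below, assuming the Python `requests` library and the status URL from the previous section; the `status` field name follows the long-running operation pattern and is an assumption of this sketch.
+
+```python
+import time
+import requests
+
+subscription_key = "<Your-Azure-Maps-Primary-Subscription-key>"            # placeholder
+status_url = "<Operation-Location value returned by the upload request>"   # placeholder
+
+# Poll until the long-running upload operation completes.
+while True:
+    result = requests.get(status_url, params={"subscription-key": subscription_key})
+    result.raise_for_status()
+    if result.json().get("status") in ("Succeeded", "Failed"):
+        break
+    time.sleep(5)
+
+# The Resource-Location header contains the URL that ends with the udid.
+resource_location = result.headers.get("Resource-Location", "")
+udid = resource_location.split("/")[-1].split("?")[0]
+print(udid)
+```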
+
+### Create a dataset
+<!--
+A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset from your GeoJSON, use the new [Dataset Create API][Dataset Create 2022-09-01-preview]. The Dataset Create API takes the `udid` you got in the previous section and returns the `datasetId` of the new dataset.
+-->
+A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset from your GeoJSON, use the new create dataset API. The create dataset API takes the `udid` you got in the previous section and returns the `datasetId` of the new dataset.
+
+> [!IMPORTANT]
+> This is different from the [previous version][Dataset Create] in that it doesn't require a `conversionId` from a converted Drawing package.
+
+To create a dataset:
+
+1. Enter the following URL to the dataset service. The request should look like the following URL (replace {udid} with the `udid` obtained in the [Check the GeoJSON package upload status](#check-the-geojson-package-upload-status) section):
+
+<!--1. Enter the following URL to the [Dataset service][Dataset Create 2022-09-01-preview]. The request should look like the following URL (replace {udid} with the `udid` obtained in [Check the GeoJSON package upload status](#check-the-geojson-package-upload-status) section):-->
+
+ ```http
+ https://us.atlas.microsoft.com/datasets?api-version=2022-09-01-preview&udid={udid}&subscription-key={Your-Azure-Maps-Primary-Subscription-key}
+ ```
+
+1. Copy the value of the `Operation-Location` key in the response header. The `Operation-Location` key is also known as the `status URL` and is required to check the status of the dataset creation process and to get the `datasetId`, which is required to create a tileset.
+
+### Check the dataset creation status
+
+To check the status of the dataset creation process and retrieve the `datasetId`:
+
+1. Enter the status URL you copied in [Create a dataset](#create-a-dataset). The request should look like the following URL:
+
+ ```http
+ https://us.atlas.microsoft.com/datasets/operations/{operationId}?api-version=2022-09-01-preview&subscription-key={Your-Azure-Maps-Primary-Subscription-key}
+ ```
+
+1. In the Header of the HTTP response, copy the value of the unique identifier contained in the `Resource-Location` key.
+
+ > `https://us.atlas.microsoft.com/datasets/**c9c15957-646c-13f2-611a-1ea7adc75174**?api-version=2022-09-01-preview`
+
+See [Next steps](#next-steps) for links to articles to help you complete your indoor map.
+
+## Add data to an existing dataset
+
+<!--
+Data can be added to an existing dataset by providing the `datasetId` parameter to the [dataset create API][Dataset Create 2022-09-01-preview] along with the unique identifier of the data you wish to add. The unique identifier can be either a `udid` or `conversionId`. This creates a new dataset consisting of the data (facilities) from both the existing dataset and the new data being imported. Once the new dataset has been created successfully, the old dataset can be deleted.
+-->
+
+Data can be added to an existing dataset by providing the `datasetId` parameter to the create dataset API along with the unique identifier of the data you wish to add. The unique identifier can be either a `udid` or `conversionId`. This creates a new dataset consisting of the data (facilities) from both the existing dataset and the new data being imported. Once the new dataset has been created successfully, the old dataset can be deleted.
+
+One thing to consider when adding to an existing dataset is how the feature IDs are created. If a dataset is created from a converted drawing package, the feature IDs are generated automatically. When a dataset is created from a GeoJSON package, feature IDs must be provided in the GeoJSON file. When appending to an existing dataset, the original dataset drives the way feature IDs are created. If the original dataset was created using a `udid`, it uses the IDs from the GeoJSON, and will continue to do so with all GeoJSON packages appended to that dataset in the future. If the dataset was created using a `conversionId`, IDs will be internally generated, and will continue to be internally generated with all GeoJSON packages appended to that dataset in the future.
+
+### Add to dataset created from a GeoJSON source
+
+If your original dataset was created from a GeoJSON source and you wish to add another facility created from a drawing package, you can append it to your existing dataset by referencing its `conversionId`, as demonstrated by this HTTP POST request:
+
+```http
+https://us.atlas.microsoft.com/datasets?api-version=2022-09-01-preview&conversionId={conversionId}&outputOntology=facility-2.0&datasetId={datasetId}
+```
+
+| Identifier | Description |
+|--|-|
+| conversionId | The ID returned when converting your drawing package. For more information, see [Convert a Drawing package][conversion]. |
+| datasetId | The dataset ID returned when creating the original dataset from a GeoJSON package. |
+
+<!--For more information, see [][].-->
+
+## Geojson zip package requirements
+
+The GeoJSON zip package consists of one or more [RFC 7946][RFC 7946] compliant GeoJSON files, one for each feature class, all in the root directory (subdirectories aren't supported), compressed with standard Zip compression and named using the `.ZIP` extension.
+
+Each feature class file must match its definition in the [Facility ontology 2.0][Facility ontology] and each feature must have a globally unique identifier.
+
+Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.) and underscore (_) characters.
+
+> [!TIP]
+> If you want to be certain you have a globally unique identifier (GUID), consider creating it by running a GUID generating tool such as the Guidgen.exe command line program (available with [Visual Studio][Visual Studio]). Guidgen.exe never produces the same number twice, no matter how many times it is run or how many different machines it runs on.
+
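+If you prefer a scripted alternative, a GUID can also be generated with a short Python sketch such as the following; uuid4 output uses only characters allowed for feature IDs.
+
+```python
+import uuid
+
+# Generate a globally unique feature ID (hexadecimal digits and hyphens only).
+feature_id = str(uuid.uuid4())
+print(feature_id)   # for example, 3f2504e0-4f89-41d3-9a0c-0305e82c3301
+```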
+### Facility ontology 2.0 validations in the Dataset
+
+[Facility ontology][Facility ontology] defines how Azure Maps Creator internally stores facility data, divided into feature classes, in a Creator dataset. When importing a GeoJSON package, anytime a feature is added or modified, a series of validations run. This includes referential integrity checks as well as geometry and attribute validations. These validations are described in more detail below.
+
+- The maximum number of features that can be imported into a dataset at a time is 150,000.
+- The facility area can be between 4 and 4,000 square kilometers.
+- The top level element is [facility][facility], which defines each building in the file *facility.geojson*.
+- Each facility has one or more levels, which are defined in the file *levels.geojson*.
+ - Each level must be inside the facility.
+- Each [level][level] contains [units][unit], [structures][structure], [verticalPenetrations][verticalPenetration] and [openings][opening]. All of the items defined in the level must be fully contained within the level geometry.
+ - `unit` can consist of an array of items such as hallways, offices and courtyards, which are defined by [area][areaElement], [line][lineElement] or [point][pointElement] elements. Units are defined in the file *unit.geojson*.
+ - All `unit` elements must be fully contained within their level and intersect with their children.
+ - `structure` defines physical, non-overlapping areas that can't be navigated through, such as a wall. Structures are defined in the file *structure.geojson*.
+ - `verticalPenetration` represents a method of navigating vertically between levels, such as stairs and elevators, and is defined in the file *verticalPenetration.geojson*.
+ - verticalPenetrations can't intersect with other verticalPenetrations on the same level.
+ - `openings` define traversable boundaries between two units, or a `unit` and `verticalPenetration` and are defined in the file *opening.geojson*.
+ - Openings can't intersect with other openings on the same level.
+ - Every `opening` must be associated with at least one `verticalPenetration` or `unit`.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Create a tileset](tutorial-creator-indoor-maps.md#create-a-tileset)
+
+> [!div class="nextstepaction"]
+> [Query datasets with WFS API](tutorial-creator-wfs.md)
+
+> [!div class="nextstepaction"]
+> [Create a feature stateset](tutorial-creator-feature-stateset.md)
+
+[Contoso building sample]: https://github.com/Azure-Samples/am-creator-indoor-data-examples
+[unit]: creator-facility-ontology.md?pivots=facility-ontology-v2#unit
+[structure]: creator-facility-ontology.md?pivots=facility-ontology-v2#structure
+[level]: creator-facility-ontology.md?pivots=facility-ontology-v2#level
+[facility]: creator-facility-ontology.md?pivots=facility-ontology-v2#facility
+[verticalPenetration]: creator-facility-ontology.md?pivots=facility-ontology-v2#verticalpenetration
+[opening]: creator-facility-ontology.md?pivots=facility-ontology-v2#opening
+[areaElement]: creator-facility-ontology.md?pivots=facility-ontology-v2#areaelement
+[lineElement]: creator-facility-ontology.md?pivots=facility-ontology-v2#lineelement
+[pointElement]: creator-facility-ontology.md?pivots=facility-ontology-v2#pointelement
+
+[conversion]: tutorial-creator-indoor-maps.md#convert-a-drawing-package
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[Creator resource]: how-to-manage-creator.md
+[Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account
+[Facility Ontology]: creator-facility-ontology.md?pivots=facility-ontology-v2
+[RFC 7946]: https://www.rfc-editor.org/rfc/rfc7946.html
+[dataset-concept]: creator-indoor-maps.md#datasets
+<!--[Dataset Create 2022-09-01-preview]: /rest/api/maps/v20220901preview/dataset/create-->
+[Visual Studio]: https://visualstudio.microsoft.com/downloads/
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-indoor-maps.md
In the next tutorials in the Creator series you'll learn to:
> * Create a feature stateset that can be used to set the states of features in your dataset. > * Update the state of a given map feature.
+> [!TIP]
+> You can also create a dataset from a GeoJSON package. For more information, see [Create a dataset using a GeoJson package (Preview)](how-to-dataset-geojson.md).
+ ## Prerequisites 1. [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account).
azure-monitor Diagnostics Extension Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-overview.md
The Log Analytics agent in Azure Monitor can also be used to collect monitoring
The key differences to consider are: -- Azure Diagnostics extension can be used only with Azure virtual machines. The Log Analytics agent can be used with virtual machines in Azure, other clouds, and on-premises.-- Azure Diagnostics extension sends data to Azure Storage, [Azure Monitor Metrics](../essentials/data-platform-metrics.md) (Windows only), and Azure Event Hubs. The Log Analytics agent collects data to [Azure Monitor Logs](../logs/data-platform-logs.md).-- The Log Analytics agent is required for [solutions](../monitor-reference.md#insights-and-curated-visualizations), [VM insights](../vm/vminsights-overview.md), and other services such as [Microsoft Defender for Cloud](../../security-center/index.yml).
+- Azure Diagnostics extension can be used only with Azure virtual machines. The Log Analytics agent can be used with virtual machines in Azure, other clouds, and on-premises.
+- Azure Diagnostics extension sends data to Azure Storage, [Azure Monitor Metrics](../essentials/data-platform-metrics.md) (Windows only), and Azure Event Hubs. The Log Analytics agent collects data to [Azure Monitor Logs](../logs/data-platform-logs.md).
+- The Log Analytics agent is required for retired [solutions](../insights/solutions.md), [VM insights](../vm/vminsights-overview.md), and other services such as [Microsoft Defender for Cloud](../../security-center/index.yml).
## Costs
azure-monitor Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/log-analytics-agent.md
Use the Log Analytics agent if you need to:
* Use [VM insights](../vm/vminsights-overview.md), which allows you to monitor your machines at scale and monitor their processes and dependencies on other resources and external processes. * Manage the security of your machines by using [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) or [Microsoft Sentinel](../../sentinel/overview.md). * Use [Azure Automation Update Management](../../automation/update-management/overview.md), [Azure Automation State Configuration](../../automation/automation-dsc-overview.md), or [Azure Automation Change Tracking and Inventory](../../automation/change-tracking/overview.md) to deliver comprehensive management of your Azure and non-Azure machines.
-* Use different [solutions](../monitor-reference.md#insights-and-curated-visualizations) to monitor a particular service or application.
+* Use different [solutions](../insights/solutions.md) to monitor a particular service or application.
Limitations of the Log Analytics agent:
azure-monitor Itsmc Connections Cherwell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-cherwell.md
- Title: Connect Cherwell with IT Service Management Connector
-description: This article provides information about how to Cherwell with the IT Service Management Connector (ITSMC) in Azure Monitor to centrally monitor and manage the ITSM work items.
- Previously updated : 2/23/2022---
-# Connect Cherwell with IT Service Management Connector
-
-This article provides information about how to configure the connection between your Cherwell instance and the IT Service Management Connector (ITSMC) in Log Analytics to centrally manage your work items.
-
-> [!NOTE]
-> As of 1-Oct-2020 Cherwell ITSM integration with Azure Alert will no longer be enabled for new customers. New ITSM Connections will not be supported.
-> Existing ITSM connections will be supported.
-
-The following sections provide details about how to connect your Cherwell product to ITSMC in Azure.
-
-## Prerequisites
-
-Ensure the following prerequisites are met:
--- ITSMC installed. More information: [Adding the IT Service Management Connector Solution](./itsmc-definition.md#install-it-service-management-connector).-- Client ID generated. More information: [Generate client ID for Cherwell](#generate-client-id-for-cherwell).-- User role: Administrator.-
-## Connection procedure
-
-Use the following procedure to create a Cherwell connection:
-
-1. In Azure portal, go to **All Resources** and look for **ServiceDesk(YourWorkspaceName)**
-
-2. Under **WORKSPACE DATA SOURCES** click **ITSM Connections**.
- ![New connection](/azure/azure-monitor/alerts/media/itsmc-connections-scsm/add-new-itsm-connection.png)
-
-3. At the top of the right pane, click **Add**.
-
-4. Provide the information as described in the following table, and click **OK** to create the connection.
-
-> [!NOTE]
-> All these parameters are mandatory.
-
-| **Field** | **Description** |
-| | |
-| **Connection Name** | Type a name for the Cherwell instance that you want to connect to ITSMC. You use this name later when you configure work items in this ITSM/ view detailed log analytics. |
-| **Partner type** | Select **Cherwell.** |
-| **Username** | Type the Cherwell user name that can connect to ITSMC. |
-| **Password** | Type the password associated with this user name. **Note:** User name and password are used for generating authentication tokens only, and are not stored anywhere within the ITSMC service.|
-| **Server URL** | Type the URL of your Cherwell instance that you want to connect to ITSMC. |
-| **Client ID** | Type the client ID for authenticating this connection, which you generated in your Cherwell instance. |
-| **Data Sync Scope** | Select the Cherwell work items that you want to sync through ITSMC. These work items are imported into log analytics. **Options:** Incidents, Change Requests. |
-| **Sync Data** | Type the number of past days that you want the data from. **Maximum limit**: 120 days. |
-| **Create new configuration item in ITSM solution** | Select this option if you want to create the configuration items in the ITSM product. When selected, ITSMC creates the affected CIs as configuration items (in case of non-existing CIs) in the supported ITSM system. **Default**: disabled. |
-
-![Cherwell connection](media/itsmc-connections-cherwell/itsm-connections-cherwell-latest.png)
-
-**When successfully connected, and synced**:
--- Selected work items from this Cherwell instance are imported into Azure **Log Analytics.** You can view the summary of these work items on the IT Service Management Connector tile.--- You can create incidents from Log Analytics alerts or from log records, or from Azure alerts in this Cherwell instance.-
-Learn more: [Create ITSM work items from Azure alerts](./itsmc-definition.md#create-itsm-work-items-from-azure-alerts).
-
-### Generate client ID for Cherwell
-
-To generate the client ID/key for Cherwell, use the following procedure:
-
-1. Log in to your Cherwell instance as admin.
-2. Click **Security** > **Edit REST API client settings**.
-3. Select **Create new client** > **client secret**.
-
- ![Cherwell user id](media/itsmc-connections-cherwell/itsmc-cherwell-client-id.png)
-
-## Next steps
-
-* [ITSM Connector Overview](itsmc-overview.md)
-* [Create ITSM work items from Azure alerts](./itsmc-definition.md#create-itsm-work-items-from-azure-alerts)
-* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md)
azure-monitor Itsmc Connections Provance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-provance.md
- Title: Connect Provance with IT Service Management Connector
-description: This article provides information about how to Provance with the IT Service Management Connector (ITSMC) in Azure Monitor to centrally monitor and manage the ITSM work items.
- Previously updated : 2/23/2022----
-# Connect Provance with IT Service Management Connector
-
-This article provides information about how to configure the connection between your Provance instance and the IT Service Management Connector (ITSMC) in Log Analytics to centrally manage your work items.
-
-> [!NOTE]
-> As of 1-Oct-2020 Provance ITSM integration with Azure Alert will no longer be enabled for new customers. New ITSM Connections will not be supported.
-> Existing ITSM connections will be supported.
-
-The following sections provide details about how to connect your Provance product to ITSMC in Azure.
-
-## Prerequisites
-
-Ensure the following prerequisites are met:
--- ITSMC installed. More information: [Adding the IT Service Management Connector Solution](./itsmc-definition.md#install-it-service-management-connector).-- Provance App should be registered with Azure AD - and client ID is made available. For detailed information, see [how to configure active directory authentication](../../app-service/configure-authentication-provider-aad.md).--- User role: Administrator.-
-## Connection procedure
-
-Use the following procedure to create a Provance connection:
-
-1. In Azure portal, go to **All Resources** and look for **ServiceDesk(YourWorkspaceName)**
-
-2. Under **WORKSPACE DATA SOURCES** click **ITSM Connections**.
- ![New connection](media/itsmc-overview/add-new-itsm-connection.png)
-
-3. At the top of the right pane, click **Add**.
-
-4. Provide the information as described in the following table, and click **OK** to create the connection.
-
-> [!NOTE]
-> All these parameters are mandatory.
-
-| **Field** | **Description** |
-| | |
-| **Connection Name** | Type a name for the Provance instance that you want to connect with ITSMC. You use this name later when you configure work items in this ITSM/ view detailed log analytics. |
-| **Partner type** | Select **Provance**. |
-| **Username** | Type the user name that can connect to ITSMC. |
-| **Password** | Type the password associated with this user name. **Note:** User name and password are used for generating authentication tokens only, and are not stored anywhere within the ITSMC service.|
-| **Server URL** | Type the URL of your Provance instance that you want to connect to ITSMC. |
-| **Client ID** | Type the client ID for authenticating this connection, which you generated in your Provance instance. More information on client ID, see [how to configure active directory authentication](../../app-service/configure-authentication-provider-aad.md). |
-| **Data Sync Scope** | Select the Provance work items that you want to sync to Azure Log Analytics, through ITSMC. These work items are imported into log analytics. **Options:** Incidents, Change Requests.|
-| **Sync Data** | Type the number of past days that you want the data from. **Maximum limit**: 120 days. |
-| **Create new configuration item in ITSM solution** | Select this option if you want to create the configuration items in the ITSM product. When selected, ITSMC creates the affected CIs as configuration items (in case of non-existing CIs) in the supported ITSM system. **Default**: disabled.|
-
-![Screenshot that highlights the Connection Name and Partner Type lists.](media/itsmc-connections-provance/itsm-connections-provance-latest.png)
-
-**When successfully connected, and synced**:
--- Selected work items from this Provance instance are imported into Azure **Log Analytics.** You can view the summary of these work items on the IT Service Management Connector tile.--- You can create incidents from Log Analytics alerts or from log records, or from Azure alerts in this Provance instance.-
-## Next steps
-
-* [ITSM Connector Overview](itsmc-overview.md)
-* [Create ITSM work items from Azure alerts](./itsmc-definition.md#create-itsm-work-items-from-azure-alerts)
-* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md)
azure-monitor Itsmc Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections.md
- Title: IT Service Management Connector in Azure Monitor
-description: This article provides information about how to connect your ITSM products or services with the IT Service Management Connector (ITSMC) in Azure Monitor to centrally monitor and manage the ITSM work items.
- Previously updated : 2/23/2022---
-# Connect ITSM products/services with IT Service Management Connector
-This article provides information about how to configure the connection between your ITSM product or service and the IT Service Management Connector (ITSMC) in Log Analytics to centrally manage your work items. For more information about ITSMC, see [Overview](./itsmc-overview.md).
-
-To set up your ITSM environment:
-1. Connect to your ITSM.
-
- - For ServiceNow ITSM, see [the ServiceNow connection instructions](./itsmc-connections-servicenow.md).
- - For SCSM, see [the System Center Service Manager connection instructions](/azure/azure-monitor/alerts/itsmc-connections).
-
- >[!NOTE]
- > As of March 1, 2022, System Center ITSM integrations with Azure alerts is no longer enabled for new customers. New System Center ITSM Connections are not supported.
- > Existing ITSM connections are supported.
-2. (Optional) Set up the IP Ranges. In order to list the ITSM IP addresses in order to allow ITSM connections from partners ITSM tools, we recommend the to list the whole public IP range of Azure region where their LogAnalytics workspace belongs. [details here](https://www.microsoft.com/en-us/download/details.aspx?id=56519)
-For regions EUS/WEU/EUS2/WUS2/US South Central the customer can list ActionGroup network tag only.
-
-## Next steps
-
-* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md)
azure-monitor Api Filtering Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md
ASP.NET **Core/Worker service apps**
> [!NOTE] > Adding a processor by using `ApplicationInsights.config` or `TelemetryConfiguration.Active` isn't valid for ASP.NET Core applications or if you're using the Microsoft.ApplicationInsights.WorkerService SDK.
-For apps written by using [ASP.NET Core](asp-net-core.md#adding-telemetry-processors) or [WorkerService](worker-service.md#adding-telemetry-processors), adding a new telemetry processor is done by using the `AddApplicationInsightsTelemetryProcessor` extension method on `IServiceCollection`, as shown. This method is called in the `ConfigureServices` method of your `Startup.cs` class.
+For apps written by using [ASP.NET Core](asp-net-core.md#adding-telemetry-processors) or [WorkerService](worker-service.md#add-telemetry-processors), adding a new telemetry processor is done by using the `AddApplicationInsightsTelemetryProcessor` extension method on `IServiceCollection`, as shown. This method is called in the `ConfigureServices` method of your `Startup.cs` class.
```csharp public void ConfigureServices(IServiceCollection services)
ASP.NET **Core/Worker service apps: Load your initializer**
> [!NOTE] > Adding an initializer by using `ApplicationInsights.config` or `TelemetryConfiguration.Active` isn't valid for ASP.NET Core applications or if you're using the Microsoft.ApplicationInsights.WorkerService SDK.
-For apps written using [ASP.NET Core](asp-net-core.md#adding-telemetryinitializers) or [WorkerService](worker-service.md#adding-telemetryinitializers), adding a new telemetry initializer is done by adding it to the Dependency Injection container, as shown. Accomplish this step in the `Startup.ConfigureServices` method.
+For apps written using [ASP.NET Core](asp-net-core.md#adding-telemetryinitializers) or [WorkerService](worker-service.md#add-telemetry-initializers), adding a new telemetry initializer is done by adding it to the Dependency Injection container, as shown. Accomplish this step in the `Startup.ConfigureServices` method.
```csharp using Microsoft.ApplicationInsights.Extensibility;
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Title: Monitor Azure app services performance .NET Core | Microsoft Docs
-description: Application performance monitoring for Azure app services using ASP.NET Core. Chart load and response time, dependency information, and set alerts on performance.
+ Title: Monitor Azure App Service performance in .NET Core | Microsoft Docs
+description: Application performance monitoring for Azure App Service using ASP.NET Core. Chart load and response time, dependency information, and set alerts on performance.
Last updated 08/05/2021 ms.devlang: csharp
-# Application Monitoring for Azure App Service and ASP.NET Core
+# Application monitoring for Azure App Service and ASP.NET Core
-Enabling monitoring on your ASP.NET Core based web applications running on [Azure App Services](../../app-service/index.yml) is now easier than ever. Whereas previously you needed to manually instrument your app, the latest extension/agent is now built into the App Service image by default. This article will walk you through enabling Azure Monitor application Insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments.
+Enabling monitoring on your ASP.NET Core-based web applications running on [Azure App Service](../../app-service/index.yml) is now easier than ever. Previously, you needed to manually instrument your app. Now, the latest extension/agent is built into the App Service image by default. This article walks you through enabling Azure Monitor Application Insights monitoring. It also provides preliminary guidance for automating the process for large-scale deployments.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
For a complete list of supported auto-instrumentation scenarios, see [Supported
# [Windows](#tab/Windows) > [!IMPORTANT]
-> Only .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) is supported for auto-instrumentation on Windows.
+> Only .NET Core [Long Term Support](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) is supported for auto-instrumentation on Windows.
-> [!NOTE]
-> Auto-instrumentation used to be known as "codeless attach" before October 2021.
+[Trim self-contained deployments](/dotnet/core/deploying/trimming/trim-self-contained) is *not supported*. Use [manual instrumentation](./asp-net-core.md) via code instead.
-[Trim self-contained deployments](/dotnet/core/deploying/trimming/trim-self-contained) is **not supported**. Use [manual instrumentation](./asp-net-core.md) via code instead.
+> [!NOTE]
+> Auto-instrumentation used to be known as "codeless attach" before October 2021.
-See the [enable monitoring section](#enable-monitoring) below to begin setting up Application Insights with your App Service resource.
+See the following [Enable monitoring](#enable-monitoring) section to begin setting up Application Insights with your App Service resource.
# [Linux](#tab/Linux) > [!IMPORTANT] > Only ASP.NET Core 6.0 is supported for auto-instrumentation on Linux.
-> [!NOTE]
-> Linux auto-instrumentation App Services portal enablement is in Public Preview. These preview versions are provided without a service level agreement. Certain features might not be supported or might have constrained capabilities.
+[Trim self-contained deployments](/dotnet/core/deploying/trimming/trim-self-contained) is *not supported*. Use [manual instrumentation](./asp-net-core.md) via code instead.
-[Trim self-contained deployments](/dotnet/core/deploying/trimming/trim-self-contained) is **not supported**. Use [manual instrumentation](./asp-net-core.md) via code instead.
+> [!NOTE]
+> Linux auto-instrumentation App Service portal enablement is in public preview. These preview versions are provided without a service level agreement. Certain features might not be supported or might have constrained capabilities.
-See the [enable monitoring section](#enable-monitoring) below to begin setting up Application Insights with your App Service resource.
+See the following [Enable monitoring](#enable-monitoring) section to begin setting up Application Insights with your App Service resource.
-
-### Enable monitoring
+### Enable monitoring
-1. **Select Application Insights** in the Azure control panel for your app service, then select **Enable**.
+1. Select **Application Insights** in the left pane for your app service. Then select **Enable**.
- :::image type="content"source="./media/azure-web-apps/enable.png" alt-text=" Screenshot of Application Insights tab with enable selected.":::
+ :::image type="content"source="./media/azure-web-apps/enable.png" alt-text=" Screenshot that shows the Application Insights tab with Enable selected.":::
-2. Choose to create a new resource, or select an existing Application Insights resource for this application.
+1. Create a new resource or select an existing Application Insights resource for this application.
> [!NOTE]
- > When you click **OK** to create the new resource you will be prompted to **Apply monitoring settings**. Selecting **Continue** will link your new Application Insights resource to your app service, doing so will also **trigger a restart of your app service**.
-
- :::image type="content"source="./media/azure-web-apps/change-resource.png" alt-text="Screenshot of Change your resource dropdown.":::
+ > When you select **OK** to create a new resource, you're prompted to **Apply monitoring settings**. Selecting **Continue** links your new Application Insights resource to your app service. Your app service then restarts.
-2. After specifying which resource to use, you can choose how you want Application Insights to collect data per platform for your application. ASP.NET Core offers **Recommended collection** or **Disabled**.
+ :::image type="content"source="./media/azure-web-apps/change-resource.png" alt-text="Screenshot that shows the Change your resource dropdown.":::
- :::image type="content"source="./media/azure-web-apps-net-core/instrument-net-core.png" alt-text=" Screenshot of instrument your application section.":::
+1. After you specify which resource to use, you can choose how you want Application Insights to collect data per platform for your application. ASP.NET Core collection options are **Recommended** or **Disabled**.
+ :::image type="content"source="./media/azure-web-apps-net-core/instrument-net-core.png" alt-text=" Screenshot that shows instrumenting your application section.":::
## Enable client-side monitoring
-Client-side monitoring is **enabled by default** for ASP.NET Core apps with **Recommended collection**, regardless of whether the app setting 'APPINSIGHTS_JAVASCRIPT_ENABLED' is present.
-
-If for some reason you would like to disable client-side monitoring:
+Client-side monitoring is enabled by default for ASP.NET Core apps with **Recommended** collection, regardless of whether the app setting `APPINSIGHTS_JAVASCRIPT_ENABLED` is present.
-* **Settings** **>** **Configuration**
- * Under Application settings, create a **new application setting**:
+If you want to disable client-side monitoring:
- name: `APPINSIGHTS_JAVASCRIPT_ENABLED`
+1. Select **Settings** > **Configuration**.
+1. Under **Application settings**, create a **New application setting** with the following information:
- Value: `false`
-
- * **Save** the settings and **Restart** your app.
+ - **Name**: `APPINSIGHTS_JAVASCRIPT_ENABLED`
+ - **Value**: `false`
+1. **Save** the settings. Restart your app.
## Automate monitoring
-To enable telemetry collection with Application Insights, only the Application settings need to be set:
-
+To enable telemetry collection with Application Insights, only the application settings must be set.
### Application settings definitions
To enable telemetry collection with Application Insights, only the Application s
|--|:|-:| |ApplicationInsightsAgent_EXTENSION_VERSION | Main extension, which controls runtime monitoring. | `~2` for Windows or `~3` for Linux | |XDT_MicrosoftApplicationInsights_Mode | In default mode, only essential features are enabled to ensure optimal performance. | `disabled` or `recommended`. |
-|XDT_MicrosoftApplicationInsights_PreemptSdk | For ASP.NET Core apps only. Enables Interop (interoperation) with Application Insights SDK. Loads the extension side-by-side with the SDK and uses it to send telemetry (disables the Application Insights SDK). |`1`|
-
+|XDT_MicrosoftApplicationInsights_PreemptSdk | For ASP.NET Core apps only. Enables Interop (interoperation) with the Application Insights SDK. Loads the extension side by side with the SDK and uses it to send telemetry. (Disables the Application Insights SDK.) |`1`|
[!INCLUDE [azure-web-apps-arm-automation](../../../includes/azure-monitor-app-insights-azure-web-apps-arm-automation.md)]
+## Upgrade monitoring extension/agent - .NET
-## Upgrade monitoring extension/agent - .NET
+To upgrade the monitoring extension/agent, follow the steps in the next sections.
### Upgrade from versions 2.8.9 and up
Upgrading from version 2.8.9 happens automatically, without any extra actions. T
To check which version of the extension you're running, go to `https://yoursitename.scm.azurewebsites.net/ApplicationInsights`. ### Upgrade from versions 1.0.0 - 2.6.5
-Starting with version 2.8.9 the pre-installed site extension is used. If you're using an earlier version, you can update via one of two ways:
-
-* [Upgrade by enabling via the portal](#enable-auto-instrumentation-monitoring). (Even if you have the Application Insights extension for Azure App Service installed, the UI shows only **Enable** button. Behind the scenes, the old private site extension will be removed.)
+Starting with version 2.8.9, the preinstalled site extension is used. If you're using an earlier version, you can update via one of two ways:
+* [Upgrade by enabling via the portal](#enable-auto-instrumentation-monitoring): Even if you have the Application Insights extension for App Service installed, the UI shows only the **Enable** button. Behind the scenes, the old private site extension will be removed.
* [Upgrade through PowerShell](#enable-through-powershell):
- 1. Set the application settings to enable the pre-installed site extension ApplicationInsightsAgent. See [Enable through PowerShell](#enable-through-powershell).
- 2. Manually remove the private site extension named Application Insights extension for Azure App Service.
+ 1. Set the application settings to enable the preinstalled site extension `ApplicationInsightsAgent`. For more information, see [Enable through PowerShell](#enable-through-powershell).
+ 1. Manually remove the private site extension named **Application Insights extension for Azure App Service**.
-If the upgrade is done from a version prior to 2.5.1, check that the ApplicationInsigths dlls are removed from the application bin folder [see troubleshooting steps](#troubleshooting).
+If the upgrade is done from a version prior to 2.5.1, check that the `ApplicationInsights` DLLs are removed from the application bin folder. For more information, see [Troubleshooting steps](#troubleshooting).
## Troubleshooting > [!NOTE]
-> When you create a web app with the `ASP.NET Core` runtimes in Azure App Services it deploys a single static HTML page as a starter website. It is **not** recommended to troubleshoot an issue with default template. Deploy an application before troubleshooting an issue.
+> When you create a web app with the `ASP.NET Core` runtimes in App Service, it deploys a single static HTML page as a starter website. We *do not* recommend that you troubleshoot an issue with the default template. Deploy an application before you troubleshoot an issue.
-Below is our step-by-step troubleshooting guide for extension/agent based monitoring for ASP.NET Core based applications running on Azure App Services.
+What follows is our step-by-step troubleshooting guide for extension/agent-based monitoring for ASP.NET Core-based applications running on App Service.
# [Windows](#tab/windows)
-1. Check that `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of "~2".
-2. Browse to `https://yoursitename.scm.azurewebsites.net/ApplicationInsights`.
+1. Check that the `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of `~2`.
+1. Browse to `https://yoursitename.scm.azurewebsites.net/ApplicationInsights`.
- :::image type="content"source="./media/azure-web-apps/app-insights-sdk-status.png" alt-text="Screenshot of the link above results page."border ="false":::
+ :::image type="content"source="./media/azure-web-apps/app-insights-sdk-status.png" alt-text="Screenshot that shows the link above the results page."border ="false":::
- - Confirm that the `Application Insights Extension Status` is `Pre-Installed Site Extension, version 2.8.x.xxxx, is running.`
+ - Confirm that **Application Insights Extension Status** is `Pre-Installed Site Extension, version 2.8.x.xxxx, is running.`
- If it isn't running, follow the [enable Application Insights monitoring instructions](#enable-auto-instrumentation-monitoring).
+ If it isn't running, follow the instructions in the section [Enable Application Insights monitoring](#enable-auto-instrumentation-monitoring).
- - Confirm that the status source exists and looks like: `Status source D:\home\LogFiles\ApplicationInsights\status\status_RD0003FF0317B6_4248_1.json`
+ - Confirm that the status source exists and looks like `Status source D:\home\LogFiles\ApplicationInsights\status\status_RD0003FF0317B6_4248_1.json`.
- If a similar value isn't present, it means the application isn't currently running or isn't supported. To ensure that the application is running, try manually visiting the application url/application endpoints, which will allow the runtime information to become available.
+ If a similar value isn't present, it means the application isn't currently running or isn't supported. To ensure that the application is running, try manually visiting the application URL/application endpoints, which will allow the runtime information to become available.
- - Confirm that `IKeyExists` is `true`. If it's `false`, add `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING` with your ikey GUID to your application settings.
+ - Confirm that **IKeyExists** is `True`. If it's `False`, add `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING` with your ikey GUID to your application settings.
- - If your application refers to any Application Insights packages, enabling the App Service integration may not take effect and the data may not appear in Application Insights. An example would be if you've previously instrumented, or attempted to instrument, your app with the [ASP.NET Core SDK](./asp-net-core.md). To fix the issue, in portal turn on "Interop with Application Insights SDK" and you'll start seeing the data in Application Insights.
- -
+ - If your application refers to any Application Insights packages, enabling the App Service integration might not take effect and the data might not appear in Application Insights. An example would be if you've previously instrumented, or attempted to instrument, your app with the [ASP.NET Core SDK](./asp-net-core.md). To fix the issue, in the portal, turn on **Interop with Application Insights SDK**. You'll start seeing the data in Application Insights.
+
> [!IMPORTANT]
- > This functionality is in preview
+ > This functionality is in preview.
- :::image type="content"source="./media/azure-web-apps-net-core/interop.png" alt-text=" Screenshot of interop setting enabled.":::
+ :::image type="content"source="./media/azure-web-apps-net-core/interop.png" alt-text=" Screenshot that shows the interop setting enabled.":::
- The data is now going to be sent using codeless approach even if Application Insights SDK was originally used or attempted to be used.
+ The data will now be sent by using a codeless approach, even if the Application Insights SDK was originally used or attempted to be used.
> [!IMPORTANT]
- > If the application used Application Insights SDK to send any telemetry, such telemetry will be disabled ΓÇô in other words, custom telemetry - if any, such as for example any Track*() methods, and any custom settings, such as sampling, will be disabled.
+ > If the application used the Application Insights SDK to send any telemetry, the telemetry will be disabled. In other words, custom telemetry (for example, any `Track*()` methods) and custom settings (such as sampling) will be disabled.
# [Linux](#tab/linux)
-1. Check that `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of "~2"
-1. Browse to https:// your site name .scm.azurewebsites.net/ApplicationInsights
+1. Check that the `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of `~2`.
+1. Browse to `https://your site name.scm.azurewebsites.net/ApplicationInsights`.
1. Within this site, confirm:
- * The status source exists and looks like: `Status source /var/log/applicationinsights/status_abcde1234567_89_0.json`
- * `Auto-Instrumentation enabled successfully`, is displayed. If a similar value isn't present, it means the application isn't running or isn't supported. To ensure that the application is running, try manually visiting the application url/application endpoints, which will allow the runtime information to become available.
- * `IKeyExists` is `true`. If it's `false`, add `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING` with your ikey GUID to your application settings.
+ * The status source exists and looks like `Status source /var/log/applicationinsights/status_abcde1234567_89_0.json`.
+ * The value `Auto-Instrumentation enabled successfully` is displayed. If a similar value isn't present, it means the application isn't running or isn't supported. To ensure that the application is running, try manually visiting the application URL/application endpoints, which will allow the runtime information to become available.
+ * **IKeyExists** is `True`. If it's `False`, add `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING` with your ikey GUID to your application settings.
+ :::image type="content" source="media/azure-web-apps-net-core/auto-instrumentation-status.png" alt-text="Screenshot that shows the auto-instrumentation status webpage." lightbox="media/azure-web-apps-net-core/auto-instrumentation-status.png":::
- ### Default website deployed with web apps doesn't support automatic client-side monitoring
-When you create a web app with the `ASP.NET Core` runtimes in Azure App Services, it deploys a single static HTML page as a starter website. The static webpage also loads an ASP.NET managed web part in IIS. This behavior allows for testing codeless server-side monitoring, but doesn't support automatic client-side monitoring.
+When you create a web app with the ASP.NET Core runtimes in App Service, it deploys a single static HTML page as a starter website. The static webpage also loads an ASP.NET-managed web part in IIS. This behavior allows for testing codeless server-side monitoring but doesn't support automatic client-side monitoring.
-If you wish to test out codeless server and client-side monitoring for ASP.NET Core in an Azure App Services web app, we recommend following the official guides for [creating a ASP.NET Core web app](../../app-service/quickstart-dotnetcore.md). Then use the instructions in the current article to enable monitoring.
+If you want to test out codeless server and client-side monitoring for ASP.NET Core in an App Service web app, we recommend that you follow the official guides for [creating an ASP.NET Core web app](../../app-service/quickstart-dotnetcore.md). Then use the instructions in the current article to enable monitoring.
[!INCLUDE [azure-web-apps-troubleshoot](../../../includes/azure-monitor-app-insights-azure-web-apps-troubleshoot.md)]
If you wish to test out codeless server and client-side monitoring for ASP.NET C
### PHP and WordPress aren't supported
-PHP and WordPress sites aren't supported. There's currently no officially supported SDK/agent for server-side monitoring of these workloads. However, manually instrumenting client-side transactions on a PHP or WordPress site by adding the client-side JavaScript to your web pages can be accomplished by using the [JavaScript SDK](./javascript.md).
+PHP and WordPress sites aren't supported. There's currently no officially supported SDK/agent for server-side monitoring of these workloads. To manually instrument client-side transactions on a PHP or WordPress site by adding the client-side JavaScript to your webpages, use the [JavaScript SDK](./javascript.md).
-The table below provides a more detailed explanation of what these values mean, their underlying causes, and recommended fixes:
+The following table provides an explanation of what these values mean, their underlying causes, and recommended fixes.
-|Problem Value |Explanation |Fix |
+|Problem value |Explanation |Fix |
|- |-||
-| `AppAlreadyInstrumented:true` | This value indicates that the extension detected that some aspect of the SDK is already present in the Application, and will back-off. It can be due to a reference to `Microsoft.ApplicationInsights.AspNetCore`, or `Microsoft.ApplicationInsights` | Remove the references. Some of these references are added by default from certain Visual Studio templates, and older versions of Visual Studio reference `Microsoft.ApplicationInsights`. |
-|`AppAlreadyInstrumented:true` | This value can also be caused by the presence of Microsoft.ApplicationsInsights dll in the app folder from a previous deployment. | Clean the app folder to ensure that these dlls are removed. Check both your local app's bin directory, and the wwwroot directory on the App Service. (To check the wwwroot directory of your App Service web app: Advanced Tools (Kudu) > Debug console > CMD > home\site\wwwroot). |
-|`IKeyExists:false`|This value indicates that the instrumentation key isn't present in the AppSetting, `APPINSIGHTS_INSTRUMENTATIONKEY`. Possible causes: The values may have been accidentally removed, forgot to set the values in automation script, etc. | Make sure the setting is present in the App Service application settings. |
+| `AppAlreadyInstrumented:true` | This value indicates that the extension detected that some aspect of the SDK is already present in the application and will back off. It can be because of a reference to `Microsoft.ApplicationInsights.AspNetCore` or `Microsoft.ApplicationInsights`. | Remove the references. Some of these references are added by default from certain Visual Studio templates. Older versions of Visual Studio reference `Microsoft.ApplicationInsights`. |
+|`AppAlreadyInstrumented:true` | This value can also be caused by the presence of the `Microsoft.ApplicationInsights` DLL in the app folder from a previous deployment. | Clean the app folder to ensure that these DLLs are removed. Check both your local app's bin directory and the *wwwroot* directory on the App Service. (To check the wwwroot directory of your App Service web app, select **Advanced Tools (Kudu)** > **Debug console** > **CMD** > **home\site\wwwroot**). |
+|`IKeyExists:false`|This value indicates that the instrumentation key isn't present in the app setting `APPINSIGHTS_INSTRUMENTATIONKEY`. Possible causes include accidentally removing the values or forgetting to set the values in automation script. | Make sure the setting is present in the App Service application settings. |
## Release notes
-For the latest updates and bug fixes, [consult the release notes](web-app-extension-release-notes.md).
+For the latest updates and bug fixes, see the [Release notes](web-app-extension-release-notes.md).
## Next steps
-* [Run the profiler on your live app](./profiler.md).
+
+* [Run the Profiler on your live app](./profiler.md).
* [Monitor Azure Functions with Application Insights](monitor-functions.md). * [Enable Azure diagnostics](../agents/diagnostics-extension-to-application-insights.md) to be sent to Application Insights. * [Monitor service health metrics](../data-platform.md) to make sure your service is available and responsive. * [Receive alert notifications](../alerts/alerts-overview.md) whenever operational events happen or metrics cross a threshold.
-* Use [Application Insights for JavaScript apps and web pages](javascript.md) to get client telemetry from the browsers that visit a web page.
+* Use [Application Insights for JavaScript apps and webpages](javascript.md) to get client telemetry from the browsers that visit a webpage.
* [Set up Availability web tests](monitor-web-app-availability.md) to be alerted if your site is down.
azure-monitor Data Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model.md
Title: Azure Application Insights Telemetry Data Model | Microsoft Docs
-description: Application Insights data model overview
+ Title: Application Insights telemetry data model | Microsoft Docs
+description: This article presents an overview of the Application Insights telemetry data model.
documentationcenter: .net
# Application Insights telemetry data model
-[Azure Application Insights](./app-insights-overview.md) sends telemetry from your web application to the Azure portal, so that you can analyze the performance and usage of your application. The telemetry model is standardized so that it is possible to create platform and language-independent monitoring.
+[Application Insights](./app-insights-overview.md) sends telemetry from your web application to the Azure portal so that you can analyze the performance and usage of your application. The telemetry model is standardized, so it's possible to create platform and language-independent monitoring.
-Data collected by Application Insights models this typical application execution pattern:
+Data collected by Application Insights models this typical application execution pattern.
-![Application Insights Application Model](./media/data-model/application-insights-data-model.png)
+![Diagram that shows an Application Insights telemetry data model.](./media/data-model/application-insights-data-model.png)
-The following types of telemetry are used to monitor the execution of your app. The following three types are typically automatically collected by the Application Insights SDK from the web application framework:
+The following types of telemetry are used to monitor the execution of your app. Three types are automatically collected by the Application Insights SDK from the web application framework:
-* [**Request**](data-model-request-telemetry.md) - Generated to log a request received by your app. For example, the Application Insights web SDK automatically generates a Request telemetry item for each HTTP request that your web app receives.
+* [Request](data-model-request-telemetry.md): Generated to log a request received by your app. For example, the Application Insights web SDK automatically generates a Request telemetry item for each HTTP request that your web app receives.
- An **Operation** is the threads of execution that processes a request. You can also [write code](./api-custom-events-metrics.md#trackrequest) to monitor other types of operation, such as a "wake up" in a web job or function that periodically processes data. Each operation has an ID. This ID that can be used to [group](./correlation.md) all telemetry generated while your app is processing the request. Each operation either succeeds or fails, and has a duration of time.
-* [**Exception**](data-model-exception-telemetry.md) - Typically represents an exception that causes an operation to fail.
-* [**Dependency**](data-model-dependency-telemetry.md) - Represents a call from your app to an external service or storage such as a REST API or SQL. In ASP.NET, dependency calls to SQL are defined by `System.Data`. Calls to HTTP endpoints are defined by `System.Net`.
+ An *operation* is made up of the threads of execution that process a request. You can also [write code](./api-custom-events-metrics.md#trackrequest) to monitor other types of operation, such as a "wake up" in a web job or function that periodically processes data. Each operation has an ID. The ID can be used to [group](./correlation.md) all telemetry generated while your app is processing the request. Each operation either succeeds or fails and has a duration of time.
+* [Exception](data-model-exception-telemetry.md): Typically represents an exception that causes an operation to fail.
+* [Dependency](data-model-dependency-telemetry.md): Represents a call from your app to an external service or storage, such as a REST API or SQL. In ASP.NET, dependency calls to SQL are defined by `System.Data`. Calls to HTTP endpoints are defined by `System.Net`.
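+
+The web SDK collects these items automatically, but the same types can also be reported by hand through the `TelemetryClient` API. The following sketch is a minimal illustration, not the SDK's automatic behavior; the connection string, names, timings, and result codes are placeholders.
+
+```csharp
+using System;
+using Microsoft.ApplicationInsights;
+using Microsoft.ApplicationInsights.Extensibility;
+
+// Minimal sketch: manually report a request and a dependency item.
+// The connection string, names, timings, and result codes are placeholders.
+var configuration = TelemetryConfiguration.CreateDefault();
+configuration.ConnectionString = "<your-connection-string>";
+var telemetryClient = new TelemetryClient(configuration);
+
+// Request: an operation the app processed (name, start, duration, response code, success).
+telemetryClient.TrackRequest("GET /orders", DateTimeOffset.UtcNow,
+    TimeSpan.FromMilliseconds(120), "200", true);
+
+// Dependency: a call made while handling the operation
+// (type, target name, command text, start, duration, success).
+telemetryClient.TrackDependency("SQL", "orders-db", "SELECT * FROM Orders",
+    DateTimeOffset.UtcNow, TimeSpan.FromMilliseconds(35), true);
+
+telemetryClient.Flush();
+```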
-Application Insights provides three additional data types for custom telemetry:
+Application Insights provides three data types for custom telemetry:
-* [Trace](data-model-trace-telemetry.md) - used either directly, or through an adapter to implement diagnostics logging using an instrumentation framework that is familiar to you, such as `Log4Net` or `System.Diagnostics`.
-* [Event](data-model-event-telemetry.md) - typically used to capture user interaction with your service, to analyze usage patterns.
-* [Metric](data-model-metric-telemetry.md) - used to report periodic scalar measurements.
+* [Trace](data-model-trace-telemetry.md): Used either directly or through an adapter to implement diagnostics logging by using an instrumentation framework that's familiar to you, such as `Log4Net` or `System.Diagnostics`.
+* [Event](data-model-event-telemetry.md): Typically used to capture user interaction with your service to analyze usage patterns.
+* [Metric](data-model-metric-telemetry.md): Used to report periodic scalar measurements.
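+
+A minimal sketch of reporting the three custom types, reusing the same `TelemetryClient` setup as the previous example, follows; the names and values are placeholders.
+
+```csharp
+using Microsoft.ApplicationInsights;
+using Microsoft.ApplicationInsights.DataContracts;
+using Microsoft.ApplicationInsights.Extensibility;
+
+// Minimal sketch: report the three custom telemetry types.
+// The connection string, names, and values are placeholders.
+var configuration = TelemetryConfiguration.CreateDefault();
+configuration.ConnectionString = "<your-connection-string>";
+var telemetryClient = new TelemetryClient(configuration);
+
+// Event: a user interaction worth counting.
+telemetryClient.TrackEvent("OrderSubmitted");
+
+// Trace: a diagnostic log line with a severity level.
+telemetryClient.TrackTrace("Order pipeline started", SeverityLevel.Information);
+
+// Metric: a scalar measurement, pre-aggregated by the SDK before sending.
+telemetryClient.GetMetric("QueueLength").TrackValue(42);
+
+telemetryClient.Flush();
+```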
-Every telemetry item can define the [context information](data-model-context.md) like application version or user session id. Context is a set of strongly typed fields that unblocks certain scenarios. When application version is properly initialized, Application Insights can detect new patterns in application behavior correlated with redeployment. Session id can be used to calculate the outage or an issue impact on users. Calculating distinct count of session id values for certain failed dependency, error trace or critical exception gives a good understanding of an impact.
+Every telemetry item can define the [context information](data-model-context.md) like application version or user session ID. Context is a set of strongly typed fields that unblocks certain scenarios. When application version is properly initialized, Application Insights can detect new patterns in application behavior correlated with redeployment.
-Application Insights telemetry model defines a way to [correlate](./correlation.md) telemetry to the operation of which itΓÇÖs a part. For example, a request can make a SQL Database calls and recorded diagnostics info. You can set the correlation context for those telemetry items that tie it back to the request telemetry.
+You can use session ID to calculate an outage or an issue impact on users. Calculating the distinct count of session ID values for a specific failed dependency, error trace, or critical exception gives you a good understanding of an impact.
+
+The Application Insights telemetry model defines a way to [correlate](./correlation.md) telemetry to the operation of which it's a part. For example, a request can make a SQL Database call and record diagnostics information. You can set the correlation context for those telemetry items that tie it back to the request telemetry.
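+
+One way to set that correlation context from .NET code is to wrap the work in an operation scope so that telemetry emitted inside the scope shares the operation's ID. The following sketch is illustrative; the operation name and connection string are placeholders.
+
+```csharp
+using Microsoft.ApplicationInsights;
+using Microsoft.ApplicationInsights.DataContracts;
+using Microsoft.ApplicationInsights.Extensibility;
+
+// Sketch: telemetry sent inside the operation scope shares the operation's
+// ID, so it's correlated with the request telemetry that the scope records.
+var configuration = TelemetryConfiguration.CreateDefault();
+configuration.ConnectionString = "<your-connection-string>";
+var telemetryClient = new TelemetryClient(configuration);
+
+using (var operation = telemetryClient.StartOperation<RequestTelemetry>("ProcessOrder"))
+{
+    // Trace and dependency items sent here inherit the operation ID.
+    telemetryClient.TrackTrace("Looking up order");
+
+    operation.Telemetry.Success = true; // recorded on the request item
+} // Disposing the scope stops the timer and sends the request telemetry.
+
+telemetryClient.Flush();
+```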
## Schema improvements
-Application Insights data model is a simple and basic yet powerful way to model your application telemetry. We strive to keep the model simple and slim to support essential scenarios and allow to extend the schema for advanced use.
+The Application Insights data model is a basic yet powerful way to model your application telemetry. We strive to keep the model simple and slim to support essential scenarios and allow the schema to be extended for advanced use.
-[To report data model or schema problems and suggestions use our GitHub repository](https://github.com/microsoft/ApplicationInsights-dotnet/issues/new/choose).
+To report data model or schema problems and suggestions, use our [GitHub repository](https://github.com/microsoft/ApplicationInsights-dotnet/issues/new/choose).
## Next steps -- [Write custom telemetry](./api-custom-events-metrics.md)
+- [Write custom telemetry](./api-custom-events-metrics.md).
- Learn how to [extend and filter telemetry](./api-filtering-sampling.md).-- Use [sampling](./sampling.md) to minimize amount of telemetry based on data model.
+- Use [sampling](./sampling.md) to minimize the amount of telemetry based on data model.
- Check out [platforms](./platforms.md) supported by Application Insights.-
azure-monitor Eventcounters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/eventcounters.md
To get a list of well known counters published by the .NET Runtime, see [Availab
## Customizing counters to be collected
-The following example shows how to add/remove counters. This customization would be done in the `ConfigureServices` method of your application after Application Insights telemetry collection is enabled using either `AddApplicationInsightsTelemetry()` or `AddApplicationInsightsWorkerService()`. Following is an example code from an ASP.NET Core application. For other type of applications, refer to [this](worker-service.md#configuring-or-removing-default-telemetrymodules) document.
+The following example shows how to add or remove counters. This customization is done in the `ConfigureServices` method of your application after Application Insights telemetry collection is enabled by using either `AddApplicationInsightsTelemetry()` or `AddApplicationInsightsWorkerService()`. The example code is from an ASP.NET Core application. For other types of applications, see [this document](worker-service.md#configure-or-remove-default-telemetry-modules).
```csharp using Microsoft.ApplicationInsights.Extensibility.EventCounterCollector;
azure-monitor Export Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-telemetry.md
Title: Continuous export of telemetry from Application Insights | Microsoft Docs
-description: Export diagnostic and usage data to storage in Microsoft Azure, and download it from there.
+description: Export diagnostic and usage data to storage in Azure and download it from there.
Last updated 10/24/2022
# Export telemetry from Application Insights
-Want to keep your telemetry for longer than the standard retention period? Or process it in some specialized way? Continuous Export is ideal for this purpose. The events you see in the Application Insights portal can be exported to storage in Microsoft Azure in JSON format. From there, you can download your data and write whatever code you need to process it.
+Do you want to keep your telemetry for longer than the standard retention period? Or do you want to process it in some specialized way? Continuous export is ideal for this purpose. The events you see in the Application Insights portal can be exported to storage in Azure in JSON format. From there, you can download your data and write whatever code you need to process it.
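+
+For example, after you download a blob, a few lines of .NET code can walk through it. The following sketch is illustrative only. It assumes each line of the downloaded file (here called *export.blob*, an example name) holds one JSON telemetry document with a `context.data.eventTime` field; check those assumptions against the data you actually export.
+
+```csharp
+using System;
+using System.IO;
+using System.Text.Json;
+
+// Illustrative only: scan a locally downloaded continuous export blob,
+// assuming one JSON telemetry document per line, and print each
+// document's event time. The file name "export.blob" and the
+// "context.data.eventTime" path are assumptions for this sketch.
+foreach (var line in File.ReadLines("export.blob"))
+{
+    if (string.IsNullOrWhiteSpace(line)) continue;
+
+    using var doc = JsonDocument.Parse(line);
+    if (doc.RootElement.TryGetProperty("context", out var context) &&
+        context.TryGetProperty("data", out var data) &&
+        data.TryGetProperty("eventTime", out var time))
+    {
+        Console.WriteLine(time.GetString());
+    }
+}
+```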
> [!IMPORTANT] > * On February 29, 2024, continuous export will be deprecated as part of the classic Application Insights deprecation.
-> * When [migrating to a workspace-based Application Insights resource](convert-classic-resource.md), you must use [diagnostic settings](#diagnostic-settings-based-export) for exporting telemetry. All [workspace-based Application Insights resources](./create-workspace-resource.md) must use [diagnostic settings](./create-workspace-resource.md#export-telemetry).
-> * Diagnostic settings export may increase costs. ([more information](export-telemetry.md#diagnostic-settings-based-export))
+> * When you [migrate to a workspace-based Application Insights resource](convert-classic-resource.md), you must use [diagnostic settings](#diagnostic-settings-based-export) for exporting telemetry. All [workspace-based Application Insights resources](./create-workspace-resource.md) must use [diagnostic settings](./create-workspace-resource.md#export-telemetry).
+> * Diagnostic settings export might increase costs. For more information, see [Diagnostic settings-based export](export-telemetry.md#diagnostic-settings-based-export).
Before you set up continuous export, there are some alternatives you might want to consider:
-* The Export button at the top of a metrics or search tab lets you transfer tables and charts to an Excel spreadsheet.
+* The **Export** button at the top of a metrics or search tab lets you transfer tables and charts to an Excel spreadsheet.
+* [Log Analytics](../logs/log-query-overview.md) provides a powerful query language for telemetry. It can also export results.
+* If you're looking to [explore your data in Power BI](../logs/log-powerbi.md), you can do that without using continuous export if you've [migrated to a workspace-based resource](convert-classic-resource.md).
+* The [Data Access REST API](https://dev.applicationinsights.io/) lets you access your telemetry programmatically.
+* You can also access setup for [continuous export via PowerShell](/powershell/module/az.applicationinsights/new-azapplicationinsightscontinuousexport).
-* [Analytics](../logs/log-query-overview.md) provides a powerful query language for telemetry. It can also export results.
-* If you're looking to [explore your data in Power BI](../logs/log-powerbi.md), you can do that without using Continuous Export if you've [migrated to a workspace-based resource](convert-classic-resource.md).
-* The [Data access REST API](https://dev.applicationinsights.io/) lets you access your telemetry programmatically.
-* You can also access setup [continuous export via PowerShell](/powershell/module/az.applicationinsights/new-azapplicationinsightscontinuousexport).
+After continuous export copies your data to storage, where it can stay as long as you like, it's still available in Application Insights for the usual [retention period](./data-retention-privacy.md).
-After continuous export copies your data to storage, where it may stay as long as you like, it's still available in Application Insights for the usual [retention period](./data-retention-privacy.md).
+## Supported regions
-## Supported Regions
-
-Continuous Export is supported in the following regions:
+Continuous export is supported in the following regions:
* Southeast Asia * Canada Central
Continuous Export is supported in the following regions:
* Japan West > [!NOTE]
-> Continuous Export will continue to work for Applications in **East US** and **West Europe** if the export was configured before February 23, 2021. New Continuous Export rules cannot be configured on any application in **East US** or **West Europe**, regardless of when the application was created.
-
-## Continuous Export advanced storage configuration
+> Continuous export will continue to work for applications in East US and West Europe if the export was configured before February 23, 2021. New continuous export rules can't be configured on any application in East US or West Europe, no matter when the application was created.
-Continuous Export **does not support** the following Azure storage features/configurations:
+## Continuous export advanced storage configuration
-* Use of [VNET/Azure Storage firewalls](../../storage/common/storage-network-security.md) with Azure Blob storage.
+Continuous export *doesn't support* the following Azure Storage features or configurations:
+* Use of [Azure Virtual Network/Azure Storage firewalls](../../storage/common/storage-network-security.md) with Azure Blob Storage.
* [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md).
-## <a name="setup"></a> Create a Continuous Export
+## <a name="setup"></a> Create a continuous export
> [!NOTE]
-> An application cannot export more than 3TB of data per day. If more than 3TB per day is exported, the export will be disabled. To export without a limit use [diagnostic settings based export](#diagnostic-settings-based-export).
+> An application can't export more than 3 TB of data per day. If more than 3 TB per day is exported, the export will be disabled. To export without a limit, use [diagnostic settings-based export](#diagnostic-settings-based-export).
-1. In the Application Insights resource for your app under configure on the left, open Continuous Export and choose **Add**:
+1. In the Application Insights resource for your app under **Configure** on the left, open **Continuous export** and select **Add**.
-2. Choose the telemetry data types you want to export.
+1. Choose the telemetry data types you want to export.
-3. Create or select an [Azure storage account](../../storage/common/storage-introduction.md) where you want to store the data. For more information on storage pricing options, visit the [official pricing page](https://azure.microsoft.com/pricing/details/storage/).
+1. Create or select an [Azure Storage account](../../storage/common/storage-introduction.md) where you want to store the data. For more information on storage pricing options, see the [Pricing page](https://azure.microsoft.com/pricing/details/storage/).
- Select Add, Export Destination, Storage account, and then either create a new store or choose an existing store.
+ Select **Add** > **Export destination** > **Storage account**. Then either create a new store or choose an existing store.
> [!Warning]
- > By default, the storage location will be set to the same geographical region as your Application Insights resource. If you store in a different region, you may incur transfer charges.
+ > By default, the storage location will be set to the same geographical region as your Application Insights resource. If you store in a different region, you might incur transfer charges.
-4. Create or select a container in the storage.
+1. Create or select a container in the storage.
> [!NOTE]
-> Once you've created your export, newly ingested data will begin to flow to Azure Blob storage. Continuous export will only transmit new telemetry that is created/ingested after continuous export was enabled. Any data that existed prior to enabling continuous export will not be exported, and there is no supported way to retroactively export previously created data using continuous export.
+> After you've created your export, newly ingested data will begin to flow to Azure Blob Storage. Continuous export only transmits new telemetry that's created or ingested after continuous export was enabled. Any data that existed prior to enabling continuous export won't be exported. There's no supported way to retroactively export previously created data by using continuous export.
There can be a delay of about an hour before data appears in the storage.
-Once the first export is complete, you'll find the following structure in your Azure Blob storage container: (This structure will vary depending on the data you're collecting.)
+After the first export is finished, you'll find the following structure in your Blob Storage container. (This structure varies depending on the data you're collecting.)
|Name | Description | |:-|:| | [Availability](export-data-model.md#availability) | Reports [availability web tests](./monitor-web-app-availability.md). |
-| [Event](export-data-model.md#events) | Custom events generated by [TrackEvent()](./api-custom-events-metrics.md#trackevent).
+| [Event](export-data-model.md#events) | Custom events generated by [TrackEvent()](./api-custom-events-metrics.md#trackevent).
| [Exceptions](export-data-model.md#exceptions) |Reports [exceptions](./asp-net-exceptions.md) in the server and in the browser. | [Messages](export-data-model.md#trace-messages) | Sent by [TrackTrace](./api-custom-events-metrics.md#tracktrace), and by the [logging adapters](./asp-net-trace-logs.md). | [Metrics](export-data-model.md#metrics) | Generated by metric API calls. | [PerformanceCounters](export-data-model.md) | Performance Counters collected by Application Insights. | [Requests](export-data-model.md#requests)| Sent by [TrackRequest](./api-custom-events-metrics.md#trackrequest). The standard modules use requests to report server response time, measured at the server.|
-### To edit continuous export
+### Edit continuous export
-Select continuous export and select the storage account to edit.
+Select **Continuous export** and select the storage account to edit.
-### To stop continuous export
+### Stop continuous export
-To stop the export, select Disable. When you select Enable again, the export will restart with new data. You won't get the data that arrived in the portal while export was disabled.
+To stop the export, select **Disable**. When you select **Enable** again, the export restarts with new data. You won't get the data that arrived in the portal while export was disabled.
To stop the export permanently, delete it. Doing so doesn't delete your data from storage. ### Can't add or change an export?
-* To add or change exports, you need Owner, Contributor, or Application Insights Contributor access rights. [Learn about roles][roles].
+
+To add or change exports, you need Owner, Contributor, or Application Insights Contributor access rights. [Learn about roles][roles].
## <a name="analyze"></a> What events do you get? The exported data is the raw telemetry we receive from your application with added location data from the client IP address.
Other calculated metrics aren't included. For example, we don't export average C
The data also includes the results of any [availability web tests](./monitor-web-app-availability.md) that you have set up. > [!NOTE]
-> **Sampling.** If your application sends a lot of data, the sampling feature may operate and send only a fraction of the generated telemetry. [Learn more about sampling.](./sampling.md)
->
+> If your application sends a lot of data, the sampling feature might operate and send only a fraction of the generated telemetry. [Learn more about sampling.](./sampling.md)
> ## <a name="get"></a> Inspect the data
-You can inspect the storage directly in the portal. Select home in the leftmost menu, at the top where it says "Azure services" select **Storage accounts**, select the storage account name, on the overview page select **Blobs** under services, and finally select the container name.
+You can inspect the storage directly in the portal. Select **Home** on the leftmost menu. At the top where it says **Azure services**, select **Storage accounts**. Select the storage account name, and on the **Overview** page select **Services** > **Blobs**. Finally, select the container name.
-To inspect Azure storage in Visual Studio, open **View**, **Cloud Explorer**. (If you don't have that menu command, you need to install the Azure SDK: Open the **New Project** dialog, expand Visual C#/Cloud and choose **Get Microsoft Azure SDK for .NET**.)
+To inspect Azure Storage in Visual Studio, select **View** > **Cloud Explorer**. If you don't have that menu command, you need to install the Azure SDK. Open the **New Project** dialog, expand **Visual C#/Cloud**, and select **Get Microsoft Azure SDK for .NET**.
-When you open your blob store, you'll see a container with a set of blob files. The URI of each file derived from your Application Insights resource name, its instrumentation key, telemetry-type/date/time. (The resource name is all lowercase, and the instrumentation key omits dashes.)
+When you open your blob store, you'll see a container with a set of blob files. The URI of each file is derived from your Application Insights resource name, its instrumentation key, and the telemetry type, date, and time. The resource name is all lowercase, and the instrumentation key omits dashes.
-![Inspect the blob store with a suitable tool](./media/export-telemetry/04-data.png)
+![Screenshot that shows inspecting the blob store with a suitable tool.](./media/export-telemetry/04-data.png)
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-The date and time are UTC and are when the telemetry was deposited in the store - not the time it was generated. So if you write code to download the data, it can move linearly through the data.
+The date and time are UTC and are when the telemetry was deposited in the store, not the time it was generated. For this reason, if you write code to download the data, it can move linearly through the data.
Here's the form of the path:
Here's the form of the path:
$"{applicationName}_{instrumentationKey}/{type}/{blobDeliveryTimeUtc:yyyy-MM-dd}/{ blobDeliveryTimeUtc:HH}/{blobId}_{blobCreationTimeUtc:yyyyMMdd_HHmmss}.blob" ```
-Where
+Where:
-* `blobCreationTimeUtc` is the time when blob was created in the internal staging storage
-* `blobDeliveryTimeUtc` is the time when blob is copied to the export destination storage
+* `blobCreationTimeUtc` is the time when the blob was created in the internal staging storage.
+* `blobDeliveryTimeUtc` is the time when the blob is copied to the export destination storage.
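As an illustration, here's a minimal C# sketch that builds the blob prefix for a given delivery hour from the path format above, which lets a downloader walk the export linearly by delivery time. The resource name, instrumentation key, and telemetry type shown are placeholder values, not values taken from a real resource.

```csharp
using System;

class BlobPathExample
{
    static void Main()
    {
        // Placeholder values; substitute your own resource name, instrumentation key, and telemetry type.
        string applicationName = "myappinsightsresource";                 // all lowercase
        string instrumentationKey = "00000000000000000000000000000000";   // dashes omitted
        string type = "Requests";
        DateTime blobDeliveryTimeUtc = DateTime.UtcNow;

        // Prefix that matches the path format shown above, up to the hour folder.
        string hourPrefix =
            $"{applicationName}_{instrumentationKey}/{type}/{blobDeliveryTimeUtc:yyyy-MM-dd}/{blobDeliveryTimeUtc:HH}/";

        Console.WriteLine(hourPrefix);
    }
}
```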
## <a name="format"></a> Data format
-* Each blob is a text file that contains multiple '\n'-separated rows. It contains the telemetry processed over a time period of roughly half a minute.
-* Each row represents a telemetry data point such as a request or page view.
-* Each row is an unformatted JSON document. If you want to view the rows, open the blob in Visual Studio and choose **Edit** > **Advanced** > **Format File**:
- ![View the telemetry with a suitable tool](./media/export-telemetry/06-json.png)
+The data is formatted so that:
+
+* Each blob is a text file that contains multiple `\n`-separated rows. It contains the telemetry processed over a time period of roughly half a minute.
+* Each row represents a telemetry data point, such as a request or page view.
+* Each row is an unformatted JSON document. If you want to view the rows, open the blob in Visual Studio and select **Edit** > **Advanced** > **Format File**.
+
+ ![Screenshot that shows viewing the telemetry with a suitable tool](./media/export-telemetry/06-json.png)
Time durations are in ticks, where 10 000 ticks = 1 ms. For example, these values show a time of 1 ms to send a request from the browser, 3 ms to receive it, and 1.8 s to process the page in the browser:
Time durations are in ticks, where 10 000 ticks = 1 ms. For example, these value
"clientProcess": {"value": 17970000.0} ```
-[Detailed data model reference for the property types and values.](export-data-model.md)
+For a detailed data model reference for the property types and values, see [Application Insights export data model](export-data-model.md).
-## Processing the data
-On a small scale, you can write some code to pull apart your data, read it into a spreadsheet, and so on. For example:
+## Process the data
+On a small scale, you can write some code to pull apart your data and read it into a spreadsheet. For example:
```csharp private IEnumerable<T> DeserializeMany<T>(string folderName)
private IEnumerable<T> DeserializeMany<T>(string folderName)
} ```
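As a related minimal sketch (not part of the official sample), the following code reads one exported blob file that's been downloaded locally and parses each `\n`-separated row as JSON. The file path is a placeholder, and the code only lists the top-level property names of the first row, because the exact properties depend on the telemetry type.

```csharp
using System;
using System.IO;
using System.Text.Json;

class ReadExportedBlob
{
    static void Main()
    {
        // Placeholder path to a blob file downloaded from the export container.
        string blobFilePath = @"C:\exports\sample.blob";

        foreach (string line in File.ReadLines(blobFilePath))
        {
            if (string.IsNullOrWhiteSpace(line))
            {
                continue;
            }

            // Each row is a standalone JSON document describing one telemetry item.
            using JsonDocument row = JsonDocument.Parse(line);
            foreach (JsonProperty property in row.RootElement.EnumerateObject())
            {
                Console.WriteLine(property.Name);
            }

            break; // Only inspect the first row in this sketch.
        }
    }
}
```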
-For a larger code sample, see [using a worker role][exportasa].
+For a larger code sample, see [Using a worker role][exportasa].
## <a name="delete"></a>Delete your old data
-You're responsible for managing your storage capacity and deleting the old data if necessary.
+You're responsible for managing your storage capacity and deleting old data, if necessary.
-## If you regenerate your storage key...
+## Regenerate your storage key
If you change the key to your storage, continuous export will stop working. You'll see a notification in your Azure account.
-Open the Continuous Export tab and edit your export. Edit the Export Destination, but just leave the same storage selected. Select OK to confirm.
+Select the **Continuous export** tab and edit your export. Edit the **Export destination** value, but leave the same storage selected. Select **OK** to confirm.
-The continuous export will restart.
+Continuous export will restart.
## Export samples
+For export samples, see:
+ * [Export to SQL using Stream Analytics][exportasa] * [Stream Analytics sample 2](../../stream-analytics/app-insights-export-stream-analytics.md)
-On larger scales, consider [HDInsight](https://azure.microsoft.com/services/hdinsight/) - Hadoop clusters in the cloud. HDInsight provides various technologies for managing and analyzing big data. You can use it to process data that has been exported from Application Insights.
+On larger scales, consider [HDInsight](https://azure.microsoft.com/services/hdinsight/) Hadoop clusters in the cloud. HDInsight provides various technologies for managing and analyzing big data. You can use it to process data that's been exported from Application Insights.
## Q & A
-* *But all I want is a one-time download of a chart.*
- Yes, you can do that. At the top of the tab, select **Export Data**.
-* *I set up an export, but there's no data in my store.*
+This section provides answers to common questions.
+
+### Can I get a one-time download of a chart?
+
+You can do that. At the top of the tab, select **Export Data**.
+
+### I set up an export, but why is there no data in my store?
- Did Application Insights receive any telemetry from your app since you set up the export? You'll only receive new data.
-* *I tried to set up an export, but was denied access*
+Did Application Insights receive any telemetry from your app since you set up the export? You'll only receive new data.
- If the account is owned by your organization, you have to be a member of the owners or contributors groups.
-* *Can I export straight to my own on-premises store?*
+### I tried to set up an export, but why was I denied access?
- No, sorry. Our export engine currently only works with Azure storage at this time.
-* *Is there any limit to the amount of data you put in my store?*
+If the account is owned by your organization, you have to be a member of the Owners or Contributors groups.
- No. We'll keep pushing data in until you delete the export. We'll stop if we hit the outer limits for blob storage, but that's huge. It's up to you to control how much storage you use.
-* *How many blobs should I see in the storage?*
+### Can I export straight to my own on-premises store?
- * For every data type you selected to export, a new blob is created every minute (if data is available).
- * In addition, for applications with high traffic, extra partition units are allocated. In this case, each unit creates a blob every minute.
-* *I regenerated the key to my storage or changed the name of the container, and now the export doesn't work.*
+No. Our export engine currently works only with Azure Storage.
- Edit the export and open the export destination tab. Leave the same storage selected as before, and select OK to confirm. Export will restart. If the change was within the past few days, you won't lose data.
-* *Can I pause the export?*
+### Is there any limit to the amount of data you put in my store?
- Yes. Select Disable.
+No. We'll keep pushing data in until you delete the export. We'll stop if we hit the outer limits for Blob Storage, but that limit is huge. It's up to you to control how much storage you use.
+
+### How many blobs should I see in the storage?
+
+ * For every data type you selected to export, a new blob is created every minute, if data is available.
+ * For applications with high traffic, extra partition units are allocated. In this case, each unit creates a blob every minute.
+
+### I regenerated the key to my storage, or changed the name of the container, but why doesn't the export work?
+
+Edit the export and select the **Export destination** tab. Leave the same storage selected as before, and select **OK** to confirm. Export will restart. If the change was within the past few days, you won't lose data.
+
+### Can I pause the export?
+
+Yes. Select **Disable**.
## Code samples * [Stream Analytics sample](../../stream-analytics/app-insights-export-stream-analytics.md)
-* [Export to SQL using Stream Analytics][exportasa]
-* [Detailed data model reference for the property types and values.](export-data-model.md)
+* [Export to SQL by using Stream Analytics][exportasa]
+* [Detailed data model reference for property types and values](export-data-model.md)
-## Diagnostic settings based export
+## Diagnostic settings-based export
-Diagnostic settings export is preferred because it provides extra features.
+Diagnostic settings export is preferred because it provides extra features:
> [!div class="checklist"]
- > * Azure storage accounts with virtual networks, firewalls, and private links
- > * Export to Event Hubs
+ > * Azure Storage accounts with virtual networks, firewalls, and private links.
+ > * Export to Azure Event Hubs.
Diagnostic settings export further differs from continuous export in the following ways: * Updated schema. * Telemetry data is sent as it arrives instead of in batched uploads.+ > [!IMPORTANT]
- > Additional costs may be incurred due to an increase in calls to the destination, such as a storage account.
+ > Extra costs might be incurred because of an increase in calls to the destination, such as a storage account.
To migrate to diagnostic settings export: 1. Disable current continuous export.
-2. [Migrate application to workspace-based](convert-classic-resource.md).
-3. [Enable diagnostic settings export](create-workspace-resource.md#export-telemetry). Select **Diagnostic settings > add diagnostic setting** from within your Application Insights resource.
+1. [Migrate your application to a workspace-based resource](convert-classic-resource.md).
+1. [Enable diagnostic settings export](create-workspace-resource.md#export-telemetry). Select **Diagnostic settings** > **Add diagnostic setting** from within your Application Insights resource.
> [!CAUTION]
-> If you want to store diagnostic logs in a Log Analytics workspace, there are two things to consider to avoid seeing duplicate data in Application Insights:
+> If you want to store diagnostic logs in a Log Analytics workspace, there are two points to consider to avoid seeing duplicate data in Application Insights:
+>
> * The destination can't be the same Log Analytics workspace that your Application Insights resource is based on.
-> * The Application Insights user can't have access to both workspaces. This can be done by setting the Log Analytics [Access control mode](../logs/log-analytics-workspace-overview.md#permissions) to **Requires workspace permissions** and ensuring through [Azure role-based access control (Azure RBAC)](./resources-roles-access-control.md) that the user only has access to the Log Analytics workspace the Application Insights resource is based on.
->
-> These steps are necessary because Application Insights accesses telemetry across Application Insight resources (including Log Analytics workspaces) to provide complete end-to-end transaction operations and accurate application maps. Because diagnostic logs use the same table names, duplicate telemetry can be displayed if the user has access to multiple resources containing the same data.
+> * The Application Insights user can't have access to both workspaces. Set the Log Analytics [access control mode](../logs/log-analytics-workspace-overview.md#permissions) to **Requires workspace permissions**. Through [Azure role-based access control](./resources-roles-access-control.md), ensure the user only has access to the Log Analytics workspace the Application Insights resource is based on.
+>
+> These steps are necessary because Application Insights accesses telemetry across Application Insight resources, including Log Analytics workspaces, to provide complete end-to-end transaction operations and accurate application maps. Because diagnostic logs use the same table names, duplicate telemetry can be displayed if the user has access to multiple resources that contain the same data.
<!--Link references--> [exportasa]: ../../stream-analytics/app-insights-export-sql-stream-analytics.md
-[roles]: ./resources-roles-access-control.md
+[roles]: ./resources-roles-access-control.md
azure-monitor Ip Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-collection.md
Title: Azure Application Insights IP address collection | Microsoft Docs
+ Title: Application Insights IP address collection | Microsoft Docs
description: Understand how Application Insights handles IP addresses and geolocation. Last updated 09/23/2020
This article explains how geolocation lookup and IP address handling work in App
## Default behavior
-By default, IP addresses are temporarily collected but not stored in Application Insights. The basic process is as follows:
+By default, IP addresses are temporarily collected but not stored in Application Insights. This process follows some basic steps.
-When telemetry is sent to Azure, Application Insights uses the IP address to do a geolocation lookup. Application Insights uses the results of this lookup to populate the fields `client_City`, `client_StateOrProvince`, and `client_CountryOrRegion`. The address is then discarded, and `0.0.0.0` is written to the `client_IP` field.
+When telemetry is sent to Azure, Application Insights uses the IP address to do a geolocation lookup. Application Insights uses the results of this lookup to populate the fields `client_City`, `client_StateOrProvince`, and `client_CountryOrRegion`. The address is then discarded, and `0.0.0.0` is written to the `client_IP` field.
-Geolocation data can be removed in the following ways.
+To remove geolocation data, see the following articles:
* [Remove the client IP initializer](../app/configuration-with-applicationinsights-config.md) * [Use a custom initializer](../app/api-filtering-sampling.md) The telemetry types are:
-* Browser telemetry: Application Insights collects the sender's IP address. The ingestion endpoint calculates the IP address.
-* Server telemetry: The Application Insights telemetry module temporarily collects the client IP address. The IP address isn't collected locally when the `X-Forwarded-For` header is set. When the incoming list of IP address has more than one item, the last IP address is used to populate geolocation fields.
+* **Browser telemetry**: Application Insights collects the sender's IP address. The ingestion endpoint calculates the IP address.
+* **Server telemetry**: The Application Insights telemetry module temporarily collects the client IP address. The IP address isn't collected locally when the `X-Forwarded-For` header is set. When the incoming IP address list has more than one item, the last IP address is used to populate geolocation fields.
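As an illustration of the last-entry rule described in the server telemetry bullet above (a sketch of the rule, not the SDK's actual implementation), a forwarded list with more than one address contributes its final entry to the geolocation lookup. The header value here uses documentation-range addresses:

```csharp
using System;
using System.Linq;

class ForwardedForExample
{
    static void Main()
    {
        // Example X-Forwarded-For value with more than one address.
        string xForwardedFor = "203.0.113.10, 198.51.100.7";

        // The last entry in the list is the address used to populate geolocation fields.
        string lastAddress = xForwardedFor
            .Split(',')
            .Select(address => address.Trim())
            .Last();

        Console.WriteLine(lastAddress); // 198.51.100.7
    }
}
```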
-This behavior is by design to help avoid unnecessary collection of personal data and ip address location information. Whenever possible, we recommend avoiding the collection of personal data.
+This behavior is by design to help avoid unnecessary collection of personal data and IP address location information. Whenever possible, we recommend avoiding the collection of personal data.
> [!NOTE]
-> Although the default is to not collect IP addresses, you can override this behavior. We recommend verifying that the collection doesn't break any compliance requirements or local regulations.
+> Although the default is to not collect IP addresses, you can override this behavior. We recommend verifying that the collection doesn't break any compliance requirements or local regulations.
>
-> To learn more about handling personal data in Application Insights, consult the [guidance for personal data](../logs/personal-data-mgmt.md).
+> To learn more about handling personal data in Application Insights, see [Guidance for personal data](../logs/personal-data-mgmt.md).
-While not collecting ip addresses will also not collect city and other geolocation attributes are populated by our pipeline by using the IP address, you can also mask IP collection at the source. This can be done by either removing the client IP initializer [Configuration with Applications Insights Configuration](configuration-with-applicationinsights-config.md), or providing your own custom initializer. For more information, see [API Filtering example.](api-filtering-sampling.md).
+When IP addresses aren't collected, the city and other geolocation attributes that our pipeline populates from the IP address also aren't collected. You can mask IP collection at the source in two ways:
+* Remove the client IP initializer. For more information, see [Configuration with Application Insights configuration](configuration-with-applicationinsights-config.md).
+* Provide your own custom initializer. For more information, see an [API filtering example](api-filtering-sampling.md).
## Storage of IP address data
-To enable IP collection and storage, the `DisableIpMasking` property of the Application Insights component must be set to `true`. You can set this property through Azure Resource Manager templates or by calling the REST API.
+To enable IP collection and storage, the `DisableIpMasking` property of the Application Insights component must be set to `true`. You can set this property through Azure Resource Manager templates (ARM templates) or by calling the REST API.
-### Azure Resource Manager template
+### ARM template
```json {
To enable IP collection and storage, the `DisableIpMasking` property of the Appl
### Portal
-If you only need to modify the behavior for a single Application Insights resource, use the Azure portal.
+If you need to modify the behavior for only a single Application Insights resource, use the Azure portal.
-1. Go your Application Insights resource, and then select **Automation** > **Export Template**.
+1. Go to your Application Insights resource, and then select **Automation** > **Export template**.
-2. Select **Deploy**.
+1. Select **Deploy**.
- ![Screenshot that shows the Deploy button highlighted in red.](media/ip-collection/deploy.png)
+ ![Screenshot that shows the Deploy button.](media/ip-collection/deploy.png)
-3. Select **Edit Template**.
+1. Select **Edit template**.
- ![Screenshot that shows the Edit button highlighted in red, along with a warning about the resource group.](media/ip-collection/edit-template.png)
+ ![Screenshot that shows the Edit button, along with a warning about the resource group.](media/ip-collection/edit-template.png)
> [!NOTE]
- > If you experience the following error (as shown in the screenshot), you can resolve it: "The resource group is in a location that is not supported by one or more resources in the template. Please choose a different resource group." Temporarily select a different resource group from the dropdown list and then re-select your original resource group.
+ > If you experience the error shown in the preceding screenshot, you can resolve it. It states: "The resource group is in a location that is not supported by one or more resources in the template. Please choose a different resource group." Temporarily select a different resource group from the dropdown list and then re-select your original resource group.
-4. In the JSON template locate `properties` inside `resources`, add a comma to the last JSON field, and then add the following new line: `"DisableIpMasking": true`. Then select **Save**.
+1. In the JSON template, locate `properties` inside `resources`. Add a comma to the last JSON field, and then add the following new line: `"DisableIpMasking": true`. Then select **Save**.
![Screenshot that shows the addition of a comma and a new line after the property for request source.](media/ip-collection/save.png)
-5. Select **Review + create** > **Create**.
+1. Select **Review + create** > **Create**.
> [!NOTE] > If you see "Your deployment failed," look through your deployment details for the one with the type `microsoft.insights/components` and check the status. If that one succeeds, the changes made to `DisableIpMasking` were deployed.
-6. After the deployment is complete, new telemetry data will be recorded.
+1. After the deployment is complete, new telemetry data will be recorded.
If you select and edit the template again, you'll see only the default template without the newly added property. If you aren't seeing IP address data and want to confirm that `"DisableIpMasking": true` is set, run the following PowerShell commands:
If you only need to modify the behavior for a single Application Insights resour
$AppInsights.Properties ```
- A list of properties is returned as a result. One of the properties should read `DisableIpMasking: true`. If you run the PowerShell commands before deploying the new property with Azure Resource Manager, the property won't exist.
+ A list of properties is returned as a result. One of the properties should read `DisableIpMasking: true`. If you run the PowerShell commands before you deploy the new property with Azure Resource Manager, the property won't exist.
### REST API
-The [REST API](/rest/api/azure/) payload to make the same modifications is as follows:
+The following [REST API](/rest/api/azure/) payload makes the same modifications:
``` PATCH https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/microsoft.insights/components/<resource-name>?api-version=2018-05-01-preview HTTP/1.1
Content-Length: 54
## Telemetry initializer
-If you need a more flexible alternative than `DisableIpMasking`, you can use a [telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer) to copy all or part of the IP address to a custom field.
+If you need a more flexible alternative than `DisableIpMasking`, you can use a [telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer) to copy all or part of the IP address to a custom field.
# [.NET](#tab/net)
You can create your telemetry initializer the same way for ASP.NET Core as for A
services.AddSingleton<ITelemetryInitializer, CloneIPAddress>(); } ```+ # [Node.js](#tab/nodejs) ### Node.js
appInsights.defaultClient.addTelemetryProcessor((envelope) => {
### Client-side JavaScript
-Unlike the server-side SDKs, the client-side JavaScript SDK doesn't calculate an IP address. By default, IP address calculation for client-side telemetry occurs at the ingestion endpoint in Azure.
-
-If you want to calculate the IP address directly on the client side, you need to add your own custom logic and use the result to set the `ai.location.ip` tag. When `ai.location.ip` is set, the ingestion endpoint doesn't perform IP address calculation, and the provided IP address is used for the geolocation lookup. In this scenario, the IP address is still zeroed out by default.
+Unlike the server-side SDKs, the client-side JavaScript SDK doesn't calculate an IP address. By default, IP address calculation for client-side telemetry occurs at the ingestion endpoint in Azure.
-To keep the entire IP address calculated from your custom logic, you could use a telemetry initializer that would copy the IP address data that you provided in `ai.location.ip` to a separate custom field. But again, unlike the server-side SDKs, the client-side SDK won't calculate the address for you if it can't rely on third-party libraries or your own custom logic.
+If you want to calculate the IP address directly on the client side, you need to add your own custom logic and use the result to set the `ai.location.ip` tag. When `ai.location.ip` is set, the ingestion endpoint doesn't perform IP address calculation, and the provided IP address is used for the geolocation lookup. In this scenario, the IP address is still zeroed out by default.
+To keep the entire IP address calculated from your custom logic, you could use a telemetry initializer that would copy the IP address data that you provided in `ai.location.ip` to a separate custom field. But again, unlike the server-side SDKs, the client-side SDK won't calculate the address for you if it can't rely on third-party libraries or your own custom logic.
```javascript appInsights.addTelemetryInitializer((item) => {
appInsights.addTelemetryInitializer((item) => {
```
-If client-side data traverses a proxy before forwarding to the ingestion endpoint, IP address calculation might show the IP address of the proxy and not the client.
+If client-side data traverses a proxy before forwarding to the ingestion endpoint, IP address calculation might show the IP address of the proxy and not the client.
requests
| project appName, operation_Name, url, resultCode, client_IP, customDimensions.["client-ip"] ```
-Newly collected IP addresses will appear in the `customDimensions_client-ip` column. The default `client-ip` column will still have all four octets zeroed out.
+Newly collected IP addresses will appear in the `customDimensions_client-ip` column. The default `client-ip` column will still have all four octets zeroed out.
If you're testing from localhost, and the value for `customDimensions_client-ip` is `::1`, this value is expected behavior. The `::1` value represents the loopback address in IPv6. It's equivalent to `127.0.0.1` in IPv4. ## Next steps * Learn more about [personal data collection](../logs/personal-data-mgmt.md) in Application Insights.-
-* Learn more about how [IP address collection](https://apmtips.com/posts/2016-07-05-client-ip-address/) in Application Insights works. This article an older external blog post written by one of our engineers. It predates the current default behavior where the IP address is recorded as `0.0.0.0`, but it goes into greater depth on the mechanics of the built-in telemetry initializer.
+* Learn more about how [IP address collection](https://apmtips.com/posts/2016-07-05-client-ip-address/) works in Application Insights. This article is an older external blog post written by one of our engineers. It predates the current default behavior where the IP address is recorded as `0.0.0.0`. The article goes into greater depth on the mechanics of the built-in telemetry initializer.
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Next, add the following line before the call `services.AddApplicationInsightsTel
services.ConfigureTelemetryModule<QuickPulseTelemetryModule> ((module, o) => module.AuthenticationApiKey = "YOUR-API-KEY-HERE"); ```
-More information on configuring WorkerService applications can be found in our guidance on [configuring telemetry modules in WorkerServices](./worker-service.md#configuring-or-removing-default-telemetrymodules).
+More information on configuring WorkerService applications can be found in our guidance on [configuring telemetry modules in WorkerServices](./worker-service.md#configure-or-remove-default-telemetry-modules).
#### Azure Function Apps
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
Metric counts such as request rate and exception rate are adjusted to compensate
> [!NOTE] > This section applies to ASP.NET applications, not to ASP.NET Core applications. [Learn about configuring adaptive sampling for ASP.NET Core applications later in this document.](#configuring-adaptive-sampling-for-aspnet-core-applications)
-> With ASP.NET Core and with Microsoft.ApplicationInsights.AspNetCore >= 2.15.0 you can configure AppInsights options via appsettings.json
- In [`ApplicationInsights.config`](./configuration-with-applicationinsights-config.md), you can adjust several parameters in the `AdaptiveSamplingTelemetryProcessor` node. The figures shown are the default values: * `<MaxTelemetryItemsPerSecond>5</MaxTelemetryItemsPerSecond>`
builder.UseAdaptiveSampling(maxTelemetryItemsPerSecond:5, excludedTypes: "Depend
### Configuring adaptive sampling for ASP.NET Core applications
-There's no `ApplicationInsights.config` for ASP.NET Core applications, so all configuration is done via code.
+ASP.NET Core applications can be configured in code or through the `appsettings.json` file. For more information, see [Configuration in ASP.NET Core](https://learn.microsoft.com/aspnet/core/fundamentals/configuration).
+ Adaptive sampling is enabled by default for all ASP.NET Core applications. You can disable or customize the sampling behavior. #### Turning off adaptive sampling
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
# Application Insights for Worker Service applications (non-HTTP applications)
-[Application Insights SDK for Worker Service](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) is a new SDK, which is best suited for non-HTTP workloads like messaging, background tasks, console applications etc. These types of applications don't have the notion of an incoming HTTP request like a traditional ASP.NET/ASP.NET Core Web Application, and hence using Application Insights packages for [ASP.NET](asp-net.md) or [ASP.NET Core](asp-net-core.md) applications isn't supported.
+[Application Insights SDK for Worker Service](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) is a new SDK, which is best suited for non-HTTP workloads like messaging, background tasks, and console applications. These types of applications don't have the notion of an incoming HTTP request like a traditional ASP.NET/ASP.NET Core web application. For this reason, using Application Insights packages for [ASP.NET](asp-net.md) or [ASP.NET Core](asp-net-core.md) applications isn't supported.
-The new SDK doesn't do any telemetry collection by itself. Instead, it brings in other well known Application Insights auto collectors like [DependencyCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DependencyCollector/), [PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector/), [ApplicationInsightsLoggingProvider](https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights) etc. This SDK exposes extension methods on `IServiceCollection` to enable and configure telemetry collection.
+The new SDK doesn't do any telemetry collection by itself. Instead, it brings in other well-known Application Insights auto collectors like [DependencyCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DependencyCollector/), [PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector/), and [ApplicationInsightsLoggingProvider](https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights). This SDK exposes extension methods on `IServiceCollection` to enable and configure telemetry collection.
## Supported scenarios
-The [Application Insights SDK for Worker Service](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) is best suited for non-HTTP applications no matter where or how they run. If your application is running and has network connectivity to Azure, telemetry can be collected. Application Insights monitoring is supported everywhere .NET Core is supported. This package can be used in the newly introduced [.NET Core Worker Service](https://devblogs.microsoft.com/aspnet/dotnet-core-workers-in-azure-container-instances), [background tasks in ASP.NET Core](/aspnet/core/fundamentals/host/hosted-services), Console apps (.NET Core/ .NET Framework), etc.
+The [Application Insights SDK for Worker Service](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) is best suited for non-HTTP applications no matter where or how they run. If your application is running and has network connectivity to Azure, telemetry can be collected. Application Insights monitoring is supported everywhere .NET Core is supported. This package can be used in the newly introduced [.NET Core Worker Service](https://devblogs.microsoft.com/aspnet/dotnet-core-workers-in-azure-container-instances), [background tasks in ASP.NET Core](/aspnet/core/fundamentals/host/hosted-services), and console apps on .NET Core and .NET Framework.
## Prerequisites
-A valid Application Insights connection string. This string is required to send any telemetry to Application Insights. If you need to create a new Application Insights resource to get a connection string, see [Create an Application Insights resource](./create-new-resource.md).
+You must have a valid Application Insights connection string. This string is required to send any telemetry to Application Insights. If you need to create a new Application Insights resource to get a connection string, see [Create an Application Insights resource](./create-new-resource.md).
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-## Using Application Insights SDK for Worker Services
+## Use Application Insights SDK for Worker Service
1. Install the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package to the application.
- The following snippet shows the changes that need to be added to your project's `.csproj` file.
-
-```xml
- <ItemGroup>
- <PackageReference Include="Microsoft.ApplicationInsights.WorkerService" Version="2.13.1" />
- </ItemGroup>
-```
+ The following snippet shows the changes that must be added to your project's `.csproj` file:
+
+ ```xml
+ <ItemGroup>
+ <PackageReference Include="Microsoft.ApplicationInsights.WorkerService" Version="2.13.1" />
+ </ItemGroup>
+ ```
-1. Configure the connection string in the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable or in configuration. (`appsettings.json`)
+1. Configure the connection string in the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable or in configuration (`appsettings.json`).
+ :::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot displaying Application Insights overview and connection string." lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png":::
-1. Retrieve an `ILogger` instance or `TelemetryClient` instance from the Dependency Injection (DI) container by calling `serviceProvider.GetRequiredService<TelemetryClient>();` or using Constructor Injection. This step will trigger setting up of `TelemetryConfiguration` and auto collection modules.
+1. Retrieve an `ILogger` instance or `TelemetryClient` instance from the Dependency Injection (DI) container by calling `serviceProvider.GetRequiredService<TelemetryClient>();` or by using Constructor Injection. This step will trigger setting up of `TelemetryConfiguration` and auto-collection modules.
Specific instructions for each type of application are described in the following sections.
-## .NET Core LTS worker service application
-
-Full example is shared [here](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/WorkerService)
-
-1. Download and install .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
-2. Create a new Worker Service project either by using Visual Studio new project template or command line `dotnet new worker`
-3. Install the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package to the application.
+## .NET Core LTS Worker Service application
-4. Add `services.AddApplicationInsightsTelemetryWorkerService();` to the `CreateHostBuilder()` method in your `Program.cs` class, as in this example:
+The full example is shared at this [GitHub page](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/WorkerService).
-```csharp
- public static IHostBuilder CreateHostBuilder(string[] args) =>
- Host.CreateDefaultBuilder(args)
- .ConfigureServices((hostContext, services) =>
- {
- services.AddHostedService<Worker>();
- services.AddApplicationInsightsTelemetryWorkerService();
- });
-```
-
-5. Modify your `Worker.cs` as per below example.
+1. Download and install .NET Core [Long Term Support (LTS)](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
+1. Create a new Worker Service project either by using a Visual Studio new project template or the command line `dotnet new worker`.
+1. Install the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package to the application.
-```csharp
- using Microsoft.ApplicationInsights;
- using Microsoft.ApplicationInsights.DataContracts;
+1. Add `services.AddApplicationInsightsTelemetryWorkerService();` to the `CreateHostBuilder()` method in your `Program.cs` class, as in this example:
- public class Worker : BackgroundService
- {
- private readonly ILogger<Worker> _logger;
- private TelemetryClient _telemetryClient;
- private static HttpClient _httpClient = new HttpClient();
+ ```csharp
+ public static IHostBuilder CreateHostBuilder(string[] args) =>
+ Host.CreateDefaultBuilder(args)
+ .ConfigureServices((hostContext, services) =>
+ {
+ services.AddHostedService<Worker>();
+ services.AddApplicationInsightsTelemetryWorkerService();
+ });
+ ```
- public Worker(ILogger<Worker> logger, TelemetryClient tc)
- {
- _logger = logger;
- _telemetryClient = tc;
- }
+1. Modify your `Worker.cs` as shown in the following example:
- protected override async Task ExecuteAsync(CancellationToken stoppingToken)
+ ```csharp
+ using Microsoft.ApplicationInsights;
+ using Microsoft.ApplicationInsights.DataContracts;
+
+ public class Worker : BackgroundService
{
- while (!stoppingToken.IsCancellationRequested)
+ private readonly ILogger<Worker> _logger;
+ private TelemetryClient _telemetryClient;
+ private static HttpClient _httpClient = new HttpClient();
+
+ public Worker(ILogger<Worker> logger, TelemetryClient tc)
{
- _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
-
- using (_telemetryClient.StartOperation<RequestTelemetry>("operation"))
+ _logger = logger;
+ _telemetryClient = tc;
+ }
+
+ protected override async Task ExecuteAsync(CancellationToken stoppingToken)
+ {
+ while (!stoppingToken.IsCancellationRequested)
{
- _logger.LogWarning("A sample warning message. By default, logs with severity Warning or higher is captured by Application Insights");
- _logger.LogInformation("Calling bing.com");
- var res = await _httpClient.GetAsync("https://bing.com");
- _logger.LogInformation("Calling bing completed with status:" + res.StatusCode);
- _telemetryClient.TrackEvent("Bing call event completed");
+ _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
+
+ using (_telemetryClient.StartOperation<RequestTelemetry>("operation"))
+ {
+ _logger.LogWarning("A sample warning message. By default, logs with severity Warning or higher is captured by Application Insights");
+ _logger.LogInformation("Calling bing.com");
+ var res = await _httpClient.GetAsync("https://bing.com");
+ _logger.LogInformation("Calling bing completed with status:" + res.StatusCode);
+ _telemetryClient.TrackEvent("Bing call event completed");
+ }
+
+ await Task.Delay(1000, stoppingToken);
}-
- await Task.Delay(1000, stoppingToken);
} }
- }
-```
+ ```
-6. Set up the connection string.
+1. Set up the connection string.
+ :::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot that shows Application Insights overview and connection string." lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png":::
-> [!NOTE]
-> We recommend that you specify the connection string in configuration. The following code sample shows how to specify a connection string in `appsettings.json`. Make sure `appsettings.json` is copied to the application root folder during publishing.
+ > [!NOTE]
+ > We recommend that you specify the connection string in configuration. The following code sample shows how to specify a connection string in `appsettings.json`. Make sure `appsettings.json` is copied to the application root folder during publishing.
-```json
- {
- "ApplicationInsights":
+ ```json
{
- "ConnectionString" : "InstrumentationKey=00000000-0000-0000-0000-000000000000;"
- },
- "Logging":
- {
- "LogLevel":
+ "ApplicationInsights":
+ {
+ "ConnectionString" : "InstrumentationKey=00000000-0000-0000-0000-000000000000;"
+ },
+ "Logging":
{
- "Default": "Warning"
+ "LogLevel":
+ {
+ "Default": "Warning"
+ }
} }
- }
-```
+ ```
Alternatively, specify the connection string in the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable.
-Typically `APPLICATIONINSIGHTS_CONNECTION_STRING` specifies the connection string for applications deployed to Web Apps as Web Jobs.
+Typically, `APPLICATIONINSIGHTS_CONNECTION_STRING` specifies the connection string for applications deployed to web apps as web jobs.
> [!NOTE] > A connection string specified in code takes precedence over the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`, which takes precedence over other options. ## ASP.NET Core background tasks with hosted services
-[This](/aspnet/core/fundamentals/host/hosted-services?tabs=visual-studio) document describes how to create backgrounds tasks in ASP.NET Core application.
+[This document](/aspnet/core/fundamentals/host/hosted-services?tabs=visual-studio) describes how to create background tasks in an ASP.NET Core application.
-Full example is shared [here](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/BackgroundTasksWithHostedService)
+The full example is shared at this [GitHub page](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/BackgroundTasksWithHostedService).
1. Install the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package to the application.
-2. Add `services.AddApplicationInsightsTelemetryWorkerService();` to the `ConfigureServices()` method, as in this example:
-
-```csharp
- public static async Task Main(string[] args)
- {
- var host = new HostBuilder()
- .ConfigureAppConfiguration((hostContext, config) =>
- {
- config.AddJsonFile("appsettings.json", optional: true);
- })
- .ConfigureServices((hostContext, services) =>
- {
- services.AddLogging();
- services.AddHostedService<TimedHostedService>();
-
- // connection string is read automatically from appsettings.json
- services.AddApplicationInsightsTelemetryWorkerService();
- })
- .UseConsoleLifetime()
- .Build();
+1. Add `services.AddApplicationInsightsTelemetryWorkerService();` to the `ConfigureServices()` method, as in this example:
- using (host)
+ ```csharp
+ public static async Task Main(string[] args)
{
- // Start the host
- await host.StartAsync();
-
- // Wait for the host to shutdown
- await host.WaitForShutdownAsync();
- }
- }
-```
-
-Following is the code for `TimedHostedService` where the background task logic resides.
-
-```csharp
- using Microsoft.ApplicationInsights;
- using Microsoft.ApplicationInsights.DataContracts;
-
- public class TimedHostedService : IHostedService, IDisposable
- {
- private readonly ILogger _logger;
- private Timer _timer;
- private TelemetryClient _telemetryClient;
- private static HttpClient httpClient = new HttpClient();
-
- public TimedHostedService(ILogger<TimedHostedService> logger, TelemetryClient tc)
- {
- _logger = logger;
- this._telemetryClient = tc;
+ var host = new HostBuilder()
+ .ConfigureAppConfiguration((hostContext, config) =>
+ {
+ config.AddJsonFile("appsettings.json", optional: true);
+ })
+ .ConfigureServices((hostContext, services) =>
+ {
+ services.AddLogging();
+ services.AddHostedService<TimedHostedService>();
+
+ // connection string is read automatically from appsettings.json
+ services.AddApplicationInsightsTelemetryWorkerService();
+ })
+ .UseConsoleLifetime()
+ .Build();
+
+ using (host)
+ {
+ // Start the host
+ await host.StartAsync();
+
+ // Wait for the host to shutdown
+ await host.WaitForShutdownAsync();
+ }
}
+ ```
- public Task StartAsync(CancellationToken cancellationToken)
- {
- _logger.LogInformation("Timed Background Service is starting.");
-
- _timer = new Timer(DoWork, null, TimeSpan.Zero,
- TimeSpan.FromSeconds(1));
+ The following code is for `TimedHostedService`, where the background task logic resides:
- return Task.CompletedTask;
- }
-
- private void DoWork(object state)
+ ```csharp
+ using Microsoft.ApplicationInsights;
+ using Microsoft.ApplicationInsights.DataContracts;
+
+ public class TimedHostedService : IHostedService, IDisposable
{
- _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
-
- using (_telemetryClient.StartOperation<RequestTelemetry>("operation"))
+ private readonly ILogger _logger;
+ private Timer _timer;
+ private TelemetryClient _telemetryClient;
+ private static HttpClient httpClient = new HttpClient();
+
+ public TimedHostedService(ILogger<TimedHostedService> logger, TelemetryClient tc)
+ {
+ _logger = logger;
+ this._telemetryClient = tc;
+ }
+
+ public Task StartAsync(CancellationToken cancellationToken)
+ {
+ _logger.LogInformation("Timed Background Service is starting.");
+
+ _timer = new Timer(DoWork, null, TimeSpan.Zero,
+ TimeSpan.FromSeconds(1));
+
+ return Task.CompletedTask;
+ }
+
+ private void DoWork(object state)
{
- _logger.LogWarning("A sample warning message. By default, logs with severity Warning or higher is captured by Application Insights");
- _logger.LogInformation("Calling bing.com");
- var res = httpClient.GetAsync("https://bing.com").GetAwaiter().GetResult();
- _logger.LogInformation("Calling bing completed with status:" + res.StatusCode);
- _telemetryClient.TrackEvent("Bing call event completed");
+ _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
+
+ using (_telemetryClient.StartOperation<RequestTelemetry>("operation"))
+ {
+ _logger.LogWarning("A sample warning message. By default, logs with severity Warning or higher is captured by Application Insights");
+ _logger.LogInformation("Calling bing.com");
+ var res = httpClient.GetAsync("https://bing.com").GetAwaiter().GetResult();
+ _logger.LogInformation("Calling bing completed with status:" + res.StatusCode);
+ _telemetryClient.TrackEvent("Bing call event completed");
+ }
} }
- }
-```
+ ```
-3. Set up the connection string.
- Use the same `appsettings.json` from the .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) Worker Service example above.
+1. Set up the connection string.
+ Use the same `appsettings.json` from the preceding .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) Worker Service example.
-## .NET Core/.NET Framework Console application
+## .NET Core/.NET Framework console application
-As mentioned in the beginning of this article, the new package can be used to enable Application Insights Telemetry from even a regular console application. This package targets [`NetStandard2.0`](/dotnet/standard/net-standard), and hence can be used for console apps in .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher, and .NET Framework [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher.
+As mentioned in the beginning of this article, the new package can be used to enable Application Insights telemetry from even a regular console application. This package targets [`NetStandard2.0`](/dotnet/standard/net-standard), so it can be used for console apps in .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher, and .NET Framework [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher.
-Full example is shared [here](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/ConsoleApp)
+The full example is shared at this [GitHub page](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/ConsoleApp).
1. Install the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package to the application.
-2. Modify Program.cs as below example.
+1. Modify *Program.cs* as shown in the following example:
-```csharp
- using Microsoft.ApplicationInsights;
- using Microsoft.ApplicationInsights.DataContracts;
- using Microsoft.Extensions.DependencyInjection;
- using Microsoft.Extensions.Logging;
- using System;
- using System.Net.Http;
- using System.Threading.Tasks;
-
- namespace WorkerSDKOnConsole
- {
- class Program
+ ```csharp
+ using Microsoft.ApplicationInsights;
+ using Microsoft.ApplicationInsights.DataContracts;
+ using Microsoft.Extensions.DependencyInjection;
+ using Microsoft.Extensions.Logging;
+ using System;
+ using System.Net.Http;
+ using System.Threading.Tasks;
+
+ namespace WorkerSDKOnConsole
{
- static async Task Main(string[] args)
+ class Program
{
- // Create the DI container.
- IServiceCollection services = new ServiceCollection();
-
- // Being a regular console app, there is no appsettings.json or configuration providers enabled by default.
- // Hence instrumentation key/ connection string and any changes to default logging level must be specified here.
- services.AddLogging(loggingBuilder => loggingBuilder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>("Category", LogLevel.Information));
- services.AddApplicationInsightsTelemetryWorkerService("instrumentation key here");
-
- // To pass a connection string
- // - aiserviceoptions must be created
- // - set connectionstring on it
- // - pass it to AddApplicationInsightsTelemetryWorkerService()
-
- // Build ServiceProvider.
- IServiceProvider serviceProvider = services.BuildServiceProvider();
-
- // Obtain logger instance from DI.
- ILogger<Program> logger = serviceProvider.GetRequiredService<ILogger<Program>>();
-
- // Obtain TelemetryClient instance from DI, for additional manual tracking or to flush.
- var telemetryClient = serviceProvider.GetRequiredService<TelemetryClient>();
-
- var httpClient = new HttpClient();
-
- while (true) // This app runs indefinitely. replace with actual application termination logic.
+ static async Task Main(string[] args)
{
- logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
-
- // Replace with a name which makes sense for this operation.
- using (telemetryClient.StartOperation<RequestTelemetry>("operation"))
+ // Create the DI container.
+ IServiceCollection services = new ServiceCollection();
+
+ // Being a regular console app, there is no appsettings.json or configuration providers enabled by default.
+ // Hence instrumentation key/ connection string and any changes to default logging level must be specified here.
+ services.AddLogging(loggingBuilder => loggingBuilder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>("Category", LogLevel.Information));
+ services.AddApplicationInsightsTelemetryWorkerService("instrumentation key here");
+
+ // To pass a connection string
+ // - aiserviceoptions must be created
+ // - set connectionstring on it
+ // - pass it to AddApplicationInsightsTelemetryWorkerService()
+
+ // Build ServiceProvider.
+ IServiceProvider serviceProvider = services.BuildServiceProvider();
+
+ // Obtain logger instance from DI.
+ ILogger<Program> logger = serviceProvider.GetRequiredService<ILogger<Program>>();
+
+ // Obtain TelemetryClient instance from DI, for additional manual tracking or to flush.
+ var telemetryClient = serviceProvider.GetRequiredService<TelemetryClient>();
+
+ var httpClient = new HttpClient();
+
+ while (true) // This app runs indefinitely. Replace with actual application termination logic.
{
- logger.LogWarning("A sample warning message. By default, logs with severity Warning or higher is captured by Application Insights");
- logger.LogInformation("Calling bing.com");
- var res = await httpClient.GetAsync("https://bing.com");
- logger.LogInformation("Calling bing completed with status:" + res.StatusCode);
- telemetryClient.TrackEvent("Bing call event completed");
+ logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
+
+ // Replace with a name which makes sense for this operation.
+ using (telemetryClient.StartOperation<RequestTelemetry>("operation"))
+ {
logger.LogWarning("A sample warning message. By default, logs with severity Warning or higher are captured by Application Insights");
+ logger.LogInformation("Calling bing.com");
+ var res = await httpClient.GetAsync("https://bing.com");
+ logger.LogInformation("Calling bing completed with status:" + res.StatusCode);
+ telemetryClient.TrackEvent("Bing call event completed");
+ }
+
+ await Task.Delay(1000);
}-
- await Task.Delay(1000);
+
+ // Explicitly call Flush() followed by sleep is required in console apps.
+ // This is to ensure that even if application terminates, telemetry is sent to the back-end.
+ telemetryClient.Flush();
+ Task.Delay(5000).Wait();
}-
- // Explicitly call Flush() followed by sleep is required in Console Apps.
- // This is to ensure that even if application terminates, telemetry is sent to the back-end.
- telemetryClient.Flush();
- Task.Delay(5000).Wait();
} }
- }
-```
+ ```
-This console application also uses the same default `TelemetryConfiguration`, and it can be customized in the same way as the examples in earlier section.
+This console application also uses the same default `TelemetryConfiguration`. It can be customized in the same way as the examples in earlier sections.
## Run your application
-Run your application. The example workers from all of the above makes an http call every second to bing.com, and also emits few logs using `ILogger`. These lines are wrapped inside `StartOperation` call of `TelemetryClient`, which is used to create an operation (in this example `RequestTelemetry` named "operation"). Application Insights will collect these ILogger logs (warning or above by default) and dependencies, and they'll be correlated to the `RequestTelemetry` with parent-child relationship. The correlation also works cross process/network boundary. For example, if the call was made to another monitored component, then it will be correlated to this parent as well.
+Run your application. The workers from all the preceding examples make an HTTP call every second to bing.com and also emit few logs by using `ILogger`. These lines are wrapped inside the `StartOperation` call of `TelemetryClient`, which is used to create an operation. In this example, `RequestTelemetry` is named "operation."
+
+Application Insights collects these ILogger logs, with a severity of Warning or above by default, and dependencies. They're correlated to `RequestTelemetry` with a parent-child relationship. Correlation also works across process/network boundaries. For example, if the call was made to another monitored component, it's correlated to this parent as well.
-This custom operation of `RequestTelemetry` can be thought of as the equivalent of an incoming web request in a typical Web Application. While it isn't necessary to use an Operation, it fits best with the [Application Insights correlation data model](./correlation.md) - with `RequestTelemetry` acting as the parent operation, and every telemetry generated inside the worker iteration being treated as logically belonging to the same operation. This approach also ensures all the telemetry generated (automatic and manual) will have the same `operation_id`. As sampling is based on `operation_id`, sampling algorithm either keeps or drops all of the telemetry from a single iteration.
+This custom operation of `RequestTelemetry` can be thought of as the equivalent of an incoming web request in a typical web application. It isn't necessary to use an operation, but it fits best with the [Application Insights correlation data model](./correlation.md). `RequestTelemetry` acts as the parent operation and every telemetry generated inside the worker iteration is treated as logically belonging to the same operation.
-The following lists the full telemetry automatically collected by Application Insights.
+This approach also ensures all the telemetry generated, both automatic and manual, will have the same `operation_id`. Because sampling is based on `operation_id`, the sampling algorithm either keeps or drops all the telemetry from a single iteration.
+
+The following sections list the full telemetry automatically collected by Application Insights.
### Live Metrics
-[Live Metrics](./live-stream.md) can be used to quickly verify if Application Insights monitoring is configured correctly. While it might take a few minutes before telemetry starts appearing in the portal and analytics, Live Metrics would show CPU usage of the running process in near real-time. It can also show other telemetry like Requests, Dependencies, Traces etc.
+[Live Metrics](./live-stream.md) can be used to quickly verify if Application Insights monitoring is configured correctly. Although it might take a few minutes before telemetry appears in the portal and analytics, Live Metrics shows CPU usage of the running process in near real time. It can also show other telemetry like Requests, Dependencies, and Traces.
### ILogger logs
-Logs emitted via `ILogger` of severity `Warning` or greater are automatically captured. Follow [ILogger docs](ilogger.md#logging-level) to customize which log levels are captured by Application Insights.
+Logs emitted via `ILogger` with the severity Warning or greater are automatically captured. Follow [ILogger docs](ilogger.md#logging-level) to customize which log levels are captured by Application Insights.
### Dependencies
-Dependency collection is enabled by default. [This](asp-net-dependencies.md#automatically-tracked-dependencies) article explains the dependencies that are automatically collected, and also contain steps to do manual tracking.
+Dependency collection is enabled by default. The article [Dependency tracking in Application Insights](asp-net-dependencies.md#automatically-tracked-dependencies) explains the dependencies that are automatically collected and also contains steps to do manual tracking.
### EventCounter
-`EventCounterCollectionModule` is enabled by default, and it will collect a default set of counters from .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) apps. The [EventCounter](eventcounters.md) tutorial lists the default set of counters collected. It also has instructions on customizing the list.
+`EventCounterCollectionModule` is enabled by default, and it will collect a default set of counters from .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) apps. The [EventCounter](eventcounters.md) tutorial lists the default set of counters collected. It also has instructions on how to customize the list.
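For example, a minimal sketch of narrowing the collected counters by configuring `EventCounterCollectionModule` through `ConfigureTelemetryModule<T>`; the counter names here are placeholders, so substitute the counters that matter for your runtime:

```csharp
using Microsoft.ApplicationInsights.Extensibility.EventCounterCollector;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetryWorkerService();

    services.ConfigureTelemetryModule<EventCounterCollectionModule>((module, options) =>
    {
        // Drop the default counters and collect only the ones listed here.
        module.Counters.Clear();
        module.Counters.Add(new EventCounterCollectionRequest("System.Runtime", "gen-0-gc-count"));
        module.Counters.Add(new EventCounterCollectionRequest("System.Runtime", "working-set"));
    });
}
```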
-### Manually tracking other telemetry
+### Manually track other telemetry
-While the SDK automatically collects telemetry as explained above, in most cases user will need to send other telemetry to Application Insights service. The recommended way to track other telemetry is by obtaining an instance of `TelemetryClient` from Dependency Injection, and then calling one of the supported `TrackXXX()` [API](api-custom-events-metrics.md) methods on it. Another typical use case is [custom tracking of operations](custom-operations-tracking.md). This approach is demonstrated in the Worker examples above.
+Although the SDK automatically collects telemetry as explained, in most cases, you'll need to send other telemetry to Application Insights. The recommended way to track other telemetry is by obtaining an instance of `TelemetryClient` from Dependency Injection and then calling one of the supported `TrackXXX()` [API](api-custom-events-metrics.md) methods on it. Another typical use case is [custom tracking of operations](custom-operations-tracking.md). This approach is demonstrated in the preceding worker examples.
## Configure the Application Insights SDK
-The default `TelemetryConfiguration` used by the worker service SDK is similar to the automatic configuration used in a ASP.NET or ASP.NET Core application, minus the TelemetryInitializers used to enrich telemetry from `HttpContext`.
+The default `TelemetryConfiguration` used by the Worker Service SDK is similar to the automatic configuration used in an ASP.NET or ASP.NET Core application, minus the telemetry initializers used to enrich telemetry from `HttpContext`.
-You can customize the Application Insights SDK for Worker Service to change the default configuration. Users of the Application Insights ASP.NET Core SDK might be familiar with changing configuration by using ASP.NET Core built-in [dependency injection](/aspnet/core/fundamentals/dependency-injection). The WorkerService SDK is also based on similar principles. Make almost all configuration changes in the `ConfigureServices()` section by calling appropriate methods on `IServiceCollection`, as detailed below.
+You can customize the Application Insights SDK for Worker Service to change the default configuration. Users of the Application Insights ASP.NET Core SDK might be familiar with changing configuration by using ASP.NET Core built-in [dependency injection](/aspnet/core/fundamentals/dependency-injection). The Worker Service SDK is also based on similar principles. Make almost all configuration changes in the `ConfigureServices()` section by calling appropriate methods on `IServiceCollection`, as detailed in the next section.
> [!NOTE]
-> While using this SDK, changing configuration by modifying `TelemetryConfiguration.Active` isn't supported, and changes will not be reflected.
+> When you use this SDK, changing configuration by modifying `TelemetryConfiguration.Active` isn't supported and changes won't be reflected.
-### Using ApplicationInsightsServiceOptions
+### Use ApplicationInsightsServiceOptions
You can modify a few common settings by passing `ApplicationInsightsServiceOptions` to `AddApplicationInsightsTelemetryWorkerService`, as in this example:
public void ConfigureServices(IServiceCollection services)
The `ApplicationInsightsServiceOptions` in this SDK is in the namespace `Microsoft.ApplicationInsights.WorkerService` as opposed to `Microsoft.ApplicationInsights.AspNetCore.Extensions` in the ASP.NET Core SDK.
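A minimal sketch of that call; the connection string is a placeholder value:

```csharp
using Microsoft.ApplicationInsights.WorkerService;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    var aiOptions = new ApplicationInsightsServiceOptions
    {
        // Placeholder; substitute your own connection string.
        ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000",
        EnableAdaptiveSampling = false,
        EnableQuickPulseMetricStream = false
    };

    services.AddApplicationInsightsTelemetryWorkerService(aiOptions);
}
```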
-Commonly used settings in `ApplicationInsightsServiceOptions`
+The following table lists commonly used settings in `ApplicationInsightsServiceOptions`.
|Setting | Description | Default |
|--------|-------------|---------|
-|EnableQuickPulseMetricStream | Enable/Disable LiveMetrics feature | true
-|EnableAdaptiveSampling | Enable/Disable Adaptive Sampling | true
-|EnableHeartbeat | Enable/Disable Heartbeats feature, which periodically (15-min default) sends a custom metric named 'HeartBeatState' with information about the runtime like .NET Version, Azure Environment information, if applicable, etc. | true
-|AddAutoCollectedMetricExtractor | Enable/Disable AutoCollectedMetrics extractor, which is a TelemetryProcessor that sends pre-aggregated metrics about Requests/Dependencies before sampling takes place. | true
-|EnableDiagnosticsTelemetryModule | Enable/Disable `DiagnosticsTelemetryModule`. Disabling this setting will cause the following settings to be ignored; `EnableHeartbeat`, `EnableAzureInstanceMetadataTelemetryModule`, `EnableAppServicesHeartbeatTelemetryModule` | true
+|EnableQuickPulseMetricStream | Enable/Disable the Live Metrics feature. | True
+|EnableAdaptiveSampling | Enable/Disable Adaptive Sampling. | True
+|EnableHeartbeat | Enable/Disable the Heartbeats feature, which periodically (15-min default) sends a custom metric named "HeartBeatState" with information about the runtime like .NET version and Azure environment, if applicable. | True
+|AddAutoCollectedMetricExtractor | Enable/Disable the AutoCollectedMetrics extractor, which is a telemetry processor that sends pre-aggregated metrics about Requests/Dependencies before sampling takes place. | True
+|EnableDiagnosticsTelemetryModule | Enable/Disable `DiagnosticsTelemetryModule`. Disabling this setting will cause the following settings to be ignored: `EnableHeartbeat`, `EnableAzureInstanceMetadataTelemetryModule`, and `EnableAppServicesHeartbeatTelemetryModule`. | True
-See the [configurable settings in `ApplicationInsightsServiceOptions`](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/NETCORE/src/Shared/Extensions/ApplicationInsightsServiceOptions.cs) for the most up-to-date list.
+For the most up-to-date list, see the [configurable settings in `ApplicationInsightsServiceOptions`](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/NETCORE/src/Shared/Extensions/ApplicationInsightsServiceOptions.cs).
### Sampling
-The Application Insights SDK for Worker Service supports both fixed-rate and adaptive sampling. Adaptive sampling is enabled by default. Sampling can be disabled by using `EnableAdaptiveSampling` option in [ApplicationInsightsServiceOptions](#using-applicationinsightsserviceoptions)
+The Application Insights SDK for Worker Service supports both fixed-rate and adaptive sampling. Adaptive sampling is enabled by default. Sampling can be disabled by using the `EnableAdaptiveSampling` option in [ApplicationInsightsServiceOptions](#use-applicationinsightsserviceoptions).
-To configure other sampling settings, the following example can be used.
+To configure other sampling settings, you can use the following example:
```csharp
using Microsoft.ApplicationInsights.Extensibility;
public void ConfigureServices(IServiceCollection services)
services.AddApplicationInsightsTelemetryWorkerService(aiOptions); // Add Adaptive Sampling with custom settings.
- // the following adds adaptive sampling with 15 items per sec.
+ // The following adds adaptive sampling with 15 items per sec.
services.Configure<TelemetryConfiguration>((telemetryConfig) => { var builder = telemetryConfig.DefaultTelemetrySink.TelemetryProcessorChainBuilder;
public void ConfigureServices(IServiceCollection services)
}
```
-More information can be found in the [Sampling](#sampling) document.
+For more information, see the [Sampling](#sampling) document.
-### Adding TelemetryInitializers
+### Add telemetry initializers
Use [telemetry initializers](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer) when you want to define properties that are sent with all telemetry.
-Add any new `TelemetryInitializer` to the `DependencyInjection` container and SDK will automatically add them to the `TelemetryConfiguration`.
+Add any new telemetry initializer to the `DependencyInjection` container and the SDK automatically adds them to `TelemetryConfiguration`.
```csharp
using Microsoft.ApplicationInsights.Extensibility;
Add any new `TelemetryInitializer` to the `DependencyInjection` container and SD
}
```
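A minimal sketch of that pattern, using a hypothetical `CloudRoleNameInitializer` that stamps a role name on every item:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical initializer that sets the cloud role name on all telemetry items.
internal class CloudRoleNameInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry) =>
        telemetry.Context.Cloud.RoleName = "my-worker";
}

public void ConfigureServices(IServiceCollection services)
{
    // Register the initializer in DI; the SDK adds it to TelemetryConfiguration.
    services.AddSingleton<ITelemetryInitializer, CloudRoleNameInitializer>();
    services.AddApplicationInsightsTelemetryWorkerService();
}
```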
-### Removing TelemetryInitializers
+### Remove telemetry initializers
Telemetry initializers are present by default. To remove all or specific telemetry initializers, use the following sample code *after* calling `AddApplicationInsightsTelemetryWorkerService()`.
Telemetry initializers are present by default. To remove all or specific telemet
public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetryWorkerService();
- // Remove a specific built-in telemetry initializer
+ // Remove a specific built-in telemetry initializer.
    var tiToRemove = services.FirstOrDefault<ServiceDescriptor>
        (t => t.ImplementationType == typeof(AspNetCoreEnvironmentTelemetryInitializer));
    if (tiToRemove != null)
Telemetry initializers are present by default. To remove all or specific telemet
        services.Remove(tiToRemove);
    }
- // Remove all initializers
+ // Remove all initializers.
    // This requires importing namespace by using Microsoft.Extensions.DependencyInjection.Extensions;
    services.RemoveAll(typeof(ITelemetryInitializer));
}
```
-### Adding telemetry processors
+### Add telemetry processors
-You can add custom telemetry processors to `TelemetryConfiguration` by using the extension method `AddApplicationInsightsTelemetryProcessor` on `IServiceCollection`. You use telemetry processors in [advanced filtering scenarios](./api-filtering-sampling.md#itelemetryprocessor-and-itelemetryinitializer) to allow for more direct control over what's included or excluded from the telemetry you send to the Application Insights service. Use the following example.
+You can add custom telemetry processors to `TelemetryConfiguration` by using the extension method `AddApplicationInsightsTelemetryProcessor` on `IServiceCollection`. You use telemetry processors in [advanced filtering scenarios](./api-filtering-sampling.md#itelemetryprocessor-and-itelemetryinitializer) to allow for more direct control over what's included or excluded from the telemetry you send to Application Insights. Use the following example:
```csharp
public void ConfigureServices(IServiceCollection services)
You can add custom telemetry processors to `TelemetryConfiguration` by using the
}
```
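A minimal sketch, assuming a hypothetical `SuccessfulDependencyFilter` that drops fast, successful dependency calls to reduce volume:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical processor that filters out quick, successful dependency calls.
internal class SuccessfulDependencyFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public SuccessfulDependencyFilter(ITelemetryProcessor next) => _next = next;

    public void Process(ITelemetry item)
    {
        if (item is DependencyTelemetry dependency &&
            dependency.Success == true &&
            dependency.Duration.TotalMilliseconds < 100)
        {
            return; // Drop the item; it never reaches the channel.
        }

        _next.Process(item);
    }
}

public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetryWorkerService();
    services.AddApplicationInsightsTelemetryProcessor<SuccessfulDependencyFilter>();
}
```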
-### Configuring or removing default TelemetryModules
+### Configure or remove default telemetry modules
Application Insights uses telemetry modules to automatically collect telemetry about specific workloads without requiring manual tracking.
-The following automatic-collection modules are enabled by default. These modules are responsible for automatically collecting telemetry. You can disable or configure them to alter their default behavior.
+The following auto-collection modules are enabled by default. These modules are responsible for automatically collecting telemetry. You can disable or configure them to alter their default behavior.
* `DependencyTrackingTelemetryModule`
* `PerformanceCollectorModule`
* `QuickPulseTelemetryModule`
-* `AppServicesHeartbeatTelemetryModule` - (There's currently an issue involving this telemetry module. For a temporary workaround, see [GitHub Issue 1689](https://github.com/microsoft/ApplicationInsights-dotnet/issues/1689
+* `AppServicesHeartbeatTelemetryModule` (There's currently an issue involving this telemetry module. For a temporary workaround, see [GitHub Issue 1689](https://github.com/microsoft/ApplicationInsights-dotnet/issues/1689).)
* `AzureInstanceMetadataTelemetryModule`
-To configure any default `TelemetryModule`, use the extension method `ConfigureTelemetryModule<T>` on `IServiceCollection`, as shown in the following example.
+To configure any default telemetry module, use the extension method `ConfigureTelemetryModule<T>` on `IServiceCollection`, as shown in the following example:
```csharp
using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
To configure any default `TelemetryModule`, use the extension method `ConfigureT
}
```
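A minimal sketch of that call, using the Live Metrics module with a placeholder authentication key:

```csharp
using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetryWorkerService();

    // The key value is a placeholder; use your own Live Metrics authentication API key.
    services.ConfigureTelemetryModule<QuickPulseTelemetryModule>(
        (module, options) => module.AuthenticationApiKey = "YOUR-LIVE-METRICS-API-KEY");
}
```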
-### Configuring telemetry channel
+### Configure the telemetry channel
-The default channel is `ServerTelemetryChannel`. You can override it as the following example shows.
+The default channel is `ServerTelemetryChannel`. You can override it as the following example shows:
```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Channel;
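A minimal sketch of swapping in `InMemoryChannel`; the buffer capacity shown is an arbitrary example value:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    // Register the channel in DI; the SDK uses it when building TelemetryConfiguration.
    services.AddSingleton(typeof(ITelemetryChannel),
        new InMemoryChannel { MaxTelemetryBufferCapacity = 19898 });

    services.AddApplicationInsightsTelemetryWorkerService();
}
```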
### Disable telemetry dynamically
-If you want to disable telemetry conditionally and dynamically, you may resolve `TelemetryConfiguration` instance with ASP.NET Core dependency injection container anywhere in your code and set `DisableTelemetry` flag on it.
+If you want to disable telemetry conditionally and dynamically, you can resolve the `TelemetryConfiguration` instance with an ASP.NET Core dependency injection container anywhere in your code and set the `DisableTelemetry` flag on it.
```csharp
public void ConfigureServices(IServiceCollection services)
If you want to disable telemetry conditionally and dynamically, you may resolve
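A minimal sketch of that approach, resolving `TelemetryConfiguration` from the service provider; where you resolve it (hosted service, background worker, and so on) depends on your app:

```csharp
using System;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Extensions.DependencyInjection;

public static void DisableTelemetry(IServiceProvider serviceProvider)
{
    // TelemetryConfiguration is registered by AddApplicationInsightsTelemetryWorkerService().
    var telemetryConfiguration = serviceProvider.GetRequiredService<TelemetryConfiguration>();
    telemetryConfiguration.DisableTelemetry = true;
}
```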
## Frequently asked questions
+This section provides answers to common questions.
+ ### Which package should I use?
-| .Net Core app scenario | Package |
+| .NET Core app scenario | Package |
|---|---|
| Without HostedServices | AspNetCore |
| With HostedServices | AspNetCore (not WorkerService) |
If you want to disable telemetry conditionally and dynamically, you may resolve
### Can HostedServices inside a .NET Core app using the AspNetCore package have TelemetryClient injected to it?
-* Yes. The config will be shared with the rest of the web application.
+Yes. The configuration will be shared with the rest of the web application.
### How can I track telemetry that's not automatically collected?
-Get an instance of `TelemetryClient` by using constructor injection, and call the required `TrackXXX()` method on it. We don't recommend creating new `TelemetryClient` instances. A singleton instance of `TelemetryClient` is already registered in the `DependencyInjection` container, which shares `TelemetryConfiguration` with rest of the telemetry. Creating a new `TelemetryClient` instance is recommended only if it needs a configuration that's separate from the rest of the telemetry.
+Get an instance of `TelemetryClient` by using constructor injection and call the required `TrackXXX()` method on it. We don't recommend creating new `TelemetryClient` instances. A singleton instance of `TelemetryClient` is already registered in the `DependencyInjection` container, which shares `TelemetryConfiguration` with the rest of the telemetry. Creating a new `TelemetryClient` instance is recommended only if it needs a configuration that's separate from the rest of the telemetry.
### Can I use Visual Studio IDE to onboard Application Insights to a Worker Service project?
-Visual Studio IDE onboarding is currently supported only for ASP.NET/ASP.NET Core Applications. This document will be updated when Visual Studio ships support for onboarding Worker service applications.
+Visual Studio IDE onboarding is currently supported only for ASP.NET/ASP.NET Core applications. This document will be updated when Visual Studio ships support for onboarding Worker Service applications.
### Can I enable Application Insights monitoring by using tools like Azure Monitor Application Insights Agent (formerly Status Monitor v2)?
-No, [Azure Monitor Application Insights Agent](./status-monitor-v2-overview.md) currently supports .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) only.
+No. [Azure Monitor Application Insights Agent](./status-monitor-v2-overview.md) currently supports .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) only.
### Are all features supported if I run my application in Linux?
Yes. Feature support for this SDK is the same in all platforms, with the followi
* Performance counters are supported only in Windows except for Process CPU/Memory shown in Live Metrics.
* Even though `ServerTelemetryChannel` is enabled by default, if the application is running in Linux or macOS, the channel doesn't automatically create a local storage folder to keep telemetry temporarily if there are network issues. Because of this limitation, telemetry is lost when there are temporary network or server issues. To work around this issue, configure a local folder for the channel:
-```csharp
-using Microsoft.ApplicationInsights.Channel;
-using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;
-
- public void ConfigureServices(IServiceCollection services)
- {
- // The following will configure the channel to use the given folder to temporarily
- // store telemetry items during network or Application Insights server issues.
- // User should ensure that the given folder already exists
- // and that the application has read/write permissions.
- services.AddSingleton(typeof(ITelemetryChannel),
- new ServerTelemetryChannel () {StorageFolder = "/tmp/myfolder"});
- services.AddApplicationInsightsTelemetryWorkerService();
- }
-```
+ ```csharp
+ using Microsoft.ApplicationInsights.Channel;
+ using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;
+
+ public void ConfigureServices(IServiceCollection services)
+ {
+ // The following will configure the channel to use the given folder to temporarily
+ // store telemetry items during network or Application Insights server issues.
+ // User should ensure that the given folder already exists
+ // and that the application has read/write permissions.
+ services.AddSingleton(typeof(ITelemetryChannel),
+ new ServerTelemetryChannel () {StorageFolder = "/tmp/myfolder"});
+ services.AddApplicationInsightsTelemetryWorkerService();
+ }
+ ```
## Sample applications
-[.NET Core Console Application](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/ConsoleApp)
-Use this sample if you're using a Console Application written in either .NET Core (2.0 or higher) or .NET Framework (4.7.2 or higher)
+[.NET Core console application](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/ConsoleApp):
+Use this sample if you're using a console application written in either .NET Core (2.0 or higher) or .NET Framework (4.7.2 or higher).
-[ASP.NET Core background tasks with HostedServices](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/BackgroundTasksWithHostedService)
-Use this sample if you are in ASP.NET Core, and creating background tasks as per official guidance [here](/aspnet/core/fundamentals/host/hosted-services)
+[ASP.NET Core background tasks with HostedServices](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/BackgroundTasksWithHostedService):
+Use this sample if you're in ASP.NET Core and creating background tasks in accordance with [official guidance](/aspnet/core/fundamentals/host/hosted-services).
-[.NET Core Worker Service](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/WorkerService)
-Use this sample if you have a .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) Worker Service application as per official guidance [here](/aspnet/core/fundamentals/host/hosted-services?tabs=visual-studio#worker-service-template)
+[.NET Core Worker Service](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/WorkerService):
+Use this sample if you have a .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) Worker Service application in accordance with [official guidance](/aspnet/core/fundamentals/host/hosted-services?tabs=visual-studio#worker-service-template).
## Open-source SDK
-* [Read and contribute to the code](https://github.com/microsoft/ApplicationInsights-dotnet).
+[Read and contribute to the code](https://github.com/microsoft/ApplicationInsights-dotnet).
-For the latest updates and bug fixes, [consult the release notes](./release-notes.md).
+For the latest updates and bug fixes, [see the Release Notes](./release-notes.md).
## Next steps
* [Use the API](./api-custom-events-metrics.md) to send your own events and metrics for a detailed view of your app's performance and usage.
* [Track more dependencies not automatically tracked](./auto-collect-dependencies.md).
-* [Enrich or Filter auto collected telemetry](./api-filtering-sampling.md).
+* [Enrich or filter auto-collected telemetry](./api-filtering-sampling.md).
* [Dependency Injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection).
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
There are two types of metric rules used by Container insights based on either P
### Prerequisites
-Your cluster must be configured to send metrics to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). For more information, see [Collect Prometheus metrics from Kubernetes cluster with Container insights](container-insights-prometheus-metrics-addon.md).
+Your cluster must be configured to send metrics to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). For more information, see [Collect Prometheus metrics with Container insights](container-insights-prometheus-metrics-addon.md).
### Enable alert rules
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
You can let the onboarding experience create a Log Analytics workspace in the de
### Azure Monitor workspace (preview)
-If you're going to configure the cluster to [collect Prometheus metrics](container-insights-prometheus-metrics-addon.md) with [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md), you must have an Azure Monitor workspace where Prometheus metrics are stored. You can let the onboarding experience create an Azure Monitor workspace in the default resource group of the AKS cluster subscription or use an existing Azure Monitor workspace.
+If you're going to configure the cluster to [collect Prometheus metrics](container-insights-prometheus.md) with [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md), you must have an Azure Monitor workspace where Prometheus metrics are stored. You can let the onboarding experience create an Azure Monitor workspace in the default resource group of the AKS cluster subscription or use an existing Azure Monitor workspace.
### Permissions
azure-monitor Container Insights Prometheus Monitoring Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-monitoring-addon.md
# Send Prometheus metrics to Azure Monitor Logs with Container insights
-This article describes how to send Prometheus metrics to a Log Analytics workspace with the Container insights monitoring addon. You can also send metrics to Azure Monitor managed service for Prometheus with the metrics addon which that supports standard Prometheus features such as PromQL and Prometheus alert rules. See [Send Kubernetes metrics to Azure Monitor managed service for Prometheus with Container insights](container-insights-prometheus-metrics-addon.md).
+This article describes how to send Prometheus metrics to a Log Analytics workspace with the Container insights monitoring addon. You can also send metrics to Azure Monitor managed service for Prometheus with the metrics addon, which supports standard Prometheus features such as PromQL and Prometheus alert rules. See [Collect Prometheus metrics with Container insights](container-insights-prometheus.md).
## Prometheus scraping settings
azure-monitor Container Insights Prometheus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus.md
# Collect Prometheus metrics with Container insights

[Prometheus](https://aka.ms/azureprometheus-promio) is a popular open-source metric monitoring solution and is the most common monitoring tool used to monitor Kubernetes clusters. Container insights uses its containerized agent to collect much of the same data that is typically collected from the cluster by Prometheus without requiring a Prometheus server. This data is presented in Container insights views and available to other Azure Monitor features such as [log queries](container-insights-log-query.md) and [log alerts](container-insights-log-alerts.md).
-Container insights can also scrape Prometheus metrics from your cluster for the cases described below. This requires exposing the Prometheus metrics endpoint through your exporters or pods and then configuring one of the addons for the Azure Monitor agent used by Container insights as shown the following diagram.
+Container insights can also scrape Prometheus metrics from your cluster and send the data to either Azure Monitor Logs or to Azure Monitor managed service for Prometheus. This requires exposing the Prometheus metrics endpoint through your exporters or pods and then configuring one of the addons for the Azure Monitor agent used by Container insights, as shown in the following diagram.
-## Collect additional data
-You may want to collect additional data in addition to the predefined set of data collected by Container insights. This data isn't used by Container insights views but is available for log queries and alerts like the other data it collects. This requires configuring the *monitoring addon* for the Azure Monitor agent, which is the one currently used by Container insights to send data to a Log Analytics workspace.
-See [Collect Prometheus metrics Logs with Container insights (preview)](container-insights-prometheus-monitoring-addon.md) to configure your cluster to collect additional Prometheus metrics with the monitoring addon.
## Send data to Azure Monitor managed service for Prometheus
-Container insights currently stores the data that it collects in Azure Monitor Logs. [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) is a fully managed Prometheus-compatible service that supports industry standard features such as PromQL, Grafana dashboards, and Prometheus alerts. This requires configuring the *metrics addon* for the Azure Monitor agent, which sends data to Prometheus.
+[Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) is a fully managed Prometheus-compatible service that supports industry standard features such as PromQL, Grafana dashboards, and Prometheus alerts. This requires configuring the *metrics addon* for the Azure Monitor agent, which sends data to Prometheus.
-See [Collect Prometheus metrics from Kubernetes cluster with Container insights](container-insights-prometheus-metrics-addon.md) to configure your cluster to send metrics to Azure Monitor managed service for Prometheus.
+> [!TIP]
+> You don't need to enable Container insights to configure your AKS cluster to send data to managed Prometheus. See [Collect Prometheus metrics from AKS cluster (preview)](../essentials/prometheus-metrics-enable.md) for details on how to configure your cluster without enabling Container insights.
+Use the following procedure to add Prometheus collection to a cluster that's already using Container insights.
+
+1. Open the **Kubernetes services** menu in the Azure portal and select your AKS cluster.
+2. Click **Insights**.
+3. Click **Monitor settings**.
+
+ :::image type="content" source="media/container-insights-prometheus-metrics-addon/aks-cluster-monitor-settings.png" lightbox="media/container-insights-prometheus-metrics-addon/aks-cluster-monitor-settings.png" alt-text="Screenshot of button for monitor settings for an AKS cluster.":::
+
+4. Click the checkbox for **Enable Prometheus metrics** and select your Azure Monitor workspace.
+5. To send the collected metrics to Grafana, select a Grafana workspace. See [Create an Azure Managed Grafana instance](../../managed-grafan) for details on creating a Grafana workspace.
+
+ :::image type="content" source="media/container-insights-prometheus-metrics-addon/aks-cluster-monitor-settings-details.png" lightbox="media/container-insights-prometheus-metrics-addon/aks-cluster-monitor-settings-details.png" alt-text="Screenshot of monitor settings for an AKS cluster.":::
+
+6. Click **Configure** to complete the configuration.
+
+See [Collect Prometheus metrics from AKS cluster (preview)](../essentials/prometheus-metrics-enable.md) for details on [verifying your deployment](../essentials/prometheus-metrics-enable.md#verify-deployment) and [limitations](../essentials/prometheus-metrics-enable.md#limitations).
+
+## Send metrics to Azure Monitor Logs
+You may want to collect data in addition to the predefined set of data collected by Container insights. This data isn't used by Container insights views but is available for log queries and alerts like the other data it collects. This requires configuring the *monitoring addon* for the Azure Monitor agent, which is the one currently used by Container insights to send data to a Log Analytics workspace.
+
+### Prometheus scraping settings
+
+Active scraping of metrics from Prometheus is performed from one of two perspectives:
+
+- **Cluster-wide**: Defined in the ConfigMap section *[prometheus_data_collection_settings.cluster]*.
+- **Node-wide**: Defined in the ConfigMap section *[prometheus_data_collection_settings.node]*.
+
+| Endpoint | Scope | Example |
+|-|-||
+| Pod annotation | Cluster-wide | `prometheus.io/scrape: "true"` <br>`prometheus.io/path: "/mymetrics"` <br>`prometheus.io/port: "8000"` <br>`prometheus.io/scheme: "http"` |
+| Kubernetes service | Cluster-wide | `http://my-service-dns.my-namespace:9100/metrics` <br>`https://metrics-server.kube-system.svc.cluster.local/metrics` |
+| URL/endpoint | Per-node and/or cluster-wide | `http://myurl:9101/metrics` |
+
+When a URL is specified, Container insights only scrapes the endpoint. When Kubernetes service is specified, the service name is resolved with the cluster DNS server to get the IP address. Then the resolved service is scraped.
+
+|Scope | Key | Data type | Value | Description |
+||--|--|-|-|
+| Cluster-wide | | | | Specify any one of the following three methods to scrape endpoints for metrics. |
+| | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) |
+| | `kubernetes_services` | String | Comma-separated array | An array of Kubernetes services to scrape metrics from kube-state-metrics. Fully qualified domain names must be used here. For example, `kubernetes_services = ["https://metrics-server.kube-system.svc.cluster.local/metrics", "http://my-service-dns.my-namespace.svc.cluster.local:9100/metrics"]` |
+| | `monitor_kubernetes_pods` | Boolean | true or false | When set to `true` in the cluster-wide settings, the Container insights agent will scrape Kubernetes pods across the entire cluster for the following Prometheus annotations:<br> `prometheus.io/scrape:`<br> `prometheus.io/scheme:`<br> `prometheus.io/path:`<br> `prometheus.io/port:` |
+| | `prometheus.io/scrape` | Boolean | true or false | Enables scraping of the pod, and `monitor_kubernetes_pods` must be set to `true`. |
+| | `prometheus.io/scheme` | String | http or https | Defaults to scraping over HTTP. If necessary, set to `https`. |
+| | `prometheus.io/path` | String | Comma-separated array | The HTTP resource path from which to fetch metrics. If the metrics path isn't `/metrics`, define it with this annotation. |
+| | `prometheus.io/port` | String | 9102 | Specify a port to scrape from. If the port isn't set, it will default to 9102. |
+| | `monitor_kubernetes_pods_namespaces` | String | Comma-separated array | An allowlist of namespaces to scrape metrics from Kubernetes pods.<br> For example, `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]` |
+| Node-wide | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) |
+| Node-wide or cluster-wide | `interval` | String | 60s | The collection interval default is one minute (60 seconds). You can modify the collection for either the *[prometheus_data_collection_settings.node]* and/or *[prometheus_data_collection_settings.cluster]* to time units such as s, m, and h. |
+| Node-wide or cluster-wide | `fieldpass`<br> `fielddrop`| String | Comma-separated array | You can specify certain metrics to be collected or not from the endpoint by setting the allow (`fieldpass`) and disallow (`fielddrop`) listing. You must set the allowlist first. |
+
+### Configure ConfigMaps
+Perform the following steps to configure your ConfigMap configuration file for your cluster. ConfigMap is a global list, and only one ConfigMap can be applied to the agent. You can't have other ConfigMaps overriding the collections.
+++
+1. [Download](https://aka.ms/container-azm-ms-agentconfig) the template ConfigMap YAML file and save it as *container-azm-ms-agentconfig.yaml*. If you've already deployed a ConfigMap to your cluster and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used.
+1. Edit the ConfigMap YAML file with your customizations to scrape Prometheus metrics.
++
+ #### [Cluster-wide](#tab/cluster-wide)
+
+ To collect Kubernetes services cluster-wide, configure the ConfigMap file by using the following example:
+
+ ```
+ prometheus-data-collection-settings: |-
+ # Custom Prometheus metrics data collection settings
+ [prometheus_data_collection_settings.cluster]
+ interval = "1m" ## Valid time units are s, m, h.
+ fieldpass = ["metric_to_pass1", "metric_to_pass12"] ## specify metrics to pass through
+ fielddrop = ["metric_to_drop"] ## specify metrics to drop from collecting
+ kubernetes_services = ["http://my-service-dns.my-namespace:9102/metrics"]
+ ```
+
+ #### [Specific URL](#tab/url)
+
+ To configure scraping of Prometheus metrics from a specific URL across the cluster, configure the ConfigMap file by using the following example:
+
+ ```
+ prometheus-data-collection-settings: |-
+ # Custom Prometheus metrics data collection settings
+ [prometheus_data_collection_settings.cluster]
+ interval = "1m" ## Valid time units are s, m, h.
+ fieldpass = ["metric_to_pass1", "metric_to_pass12"] ## specify metrics to pass through
+ fielddrop = ["metric_to_drop"] ## specify metrics to drop from collecting
+ urls = ["http://myurl:9101/metrics"] ## An array of urls to scrape metrics from
+ ```
+
+ #### [DaemonSet](#tab/deamonset)
+
+ To configure scraping of Prometheus metrics from an agent's DaemonSet for every individual node in the cluster, configure the following example in the ConfigMap:
+
+ ```
+ prometheus-data-collection-settings: |-
+ # Custom Prometheus metrics data collection settings
+ [prometheus_data_collection_settings.node]
+ interval = "1m" ## Valid time units are s, m, h.
+ urls = ["http://$NODE_IP:9103/metrics"]
+ fieldpass = ["metric_to_pass1", "metric_to_pass2"]
+ fielddrop = ["metric_to_drop"]
+ ```
+
+ `$NODE_IP` is a specific Container insights parameter and can be used instead of a node IP address. It must be all uppercase.
+
+ #### [Pod annotation](#tab/pod)
+
+ To configure scraping of Prometheus metrics by specifying a pod annotation:
+
+ 1. In the ConfigMap, specify the following configuration:
+
+ ```
+ prometheus-data-collection-settings: |-
+ # Custom Prometheus metrics data collection settings
+ [prometheus_data_collection_settings.cluster]
+ interval = "1m" ## Valid time units are s, m, h
+ monitor_kubernetes_pods = true
+ ```
+
+ 1. Specify the following configuration for pod annotations:
+
+ ```
+ - prometheus.io/scrape:"true" #Enable scraping for this pod
+ - prometheus.io/scheme:"http" #If the metrics endpoint is secured then you will need to set this to `https`, if not default 'http'
+ - prometheus.io/path:"/mymetrics" #If the metrics path is not /metrics, define it with this annotation.
+ - prometheus.io/port:"8000" #If port is not 9102 use this annotation
+ ```
+
+ If you want to restrict monitoring to specific namespaces for pods that have annotations, for example, to only include pods dedicated to production workloads, set `monitor_kubernetes_pods` to `true` in the ConfigMap. Then add the namespace filter `monitor_kubernetes_pods_namespaces` to specify the namespaces to scrape from. An example is `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]`.
+
+2. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
+
+ Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
+
+The configuration change can take a few minutes to finish before taking effect. You must restart all Azure Monitor Agent pods manually. When the restarts are finished, a message appears that's similar to the following and includes the result `configmap "container-azm-ms-agentconfig" created`.
++
+### Verify configuration
+
+To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs ama-logs-fdf58 -n=kube-system`.
++
+If there are configuration errors from the Azure Monitor Agent pods, the output will show errors similar to the following example:
+
+```
+***************Start Config Processing********************
+config::unsupported/missing config schema version - 'v21' , using defaults
+```
+
+Errors related to applying configuration changes are also available for review. The following options are available to perform additional troubleshooting of configuration changes and scraping of Prometheus metrics:
+
+- From agent pod logs, by using the same `kubectl logs` command.
+
+- From Live Data (preview). Live Data (preview) logs show errors similar to the following example:
+
+ ```
+ 2019-07-08T18:55:00Z E! [inputs.prometheus]: Error in plugin: error making HTTP request to http://invalidurl:1010/metrics: Get http://invalidurl:1010/metrics: dial tcp: lookup invalidurl on 10.0.0.10:53: no such host
+ ```
+
+- From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with *Warning* severity for scrape errors and *Error* severity for configuration errors. If there are no errors, the entry in the table will have data with severity *Info*, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred and also the first occurrence, last occurrence, and count in the last hour.
+- For Azure Red Hat OpenShift v3.x and v4.x, check the Azure Monitor Agent logs by searching the **ContainerLog** table to verify if log collection of openshift-azure-logging is enabled.
+
+Errors prevent Azure Monitor Agent from parsing the file, causing it to restart and use the default configuration. After you correct the errors in ConfigMap on clusters other than Azure Red Hat OpenShift v3.x, save the YAML file and apply the updated ConfigMaps by running the command `kubectl apply -f <configmap_yaml_file.yaml>`.
+
+For Azure Red Hat OpenShift v3.x, edit and save the updated ConfigMaps by running the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging`.
+
+### Query Prometheus metrics data
+
+To view Prometheus metrics scraped by Azure Monitor and any configuration/scraping errors reported by the agent, review [Query Prometheus metrics data](container-insights-log-query.md#prometheus-metrics).
+
+### View Prometheus metrics in Grafana
+
+Container insights supports viewing metrics stored in your Log Analytics workspace in Grafana dashboards. We've provided a template that you can download from Grafana's [dashboard repository](https://grafana.com/grafana/dashboards?dataSource=grafana-azure-monitor-datasource&category=docker). Use the template to get started and reference it to help you learn how to query other data from your monitored clusters to visualize in custom Grafana dashboards.
+
## Next steps
-- [Configure your cluster to send data to Azure Monitor managed service for Prometheus](container-insights-prometheus-metrics-addon.md).
-- [Configure your cluster to send data to Azure Monitor Logs](container-insights-prometheus-metrics-addon.md).
+- [See the default configuration for Prometheus metrics](../essentials/prometheus-metrics-scrape-default.md).
+- [Customize Prometheus metric scraping for the cluster](../essentials/prometheus-metrics-scrape-configuration.md).
azure-monitor Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-sources.md
# Sources of monitoring data for Azure Monitor
-Azure Monitor is based on a [common monitoring data platform](data-platform.md) that includes [Logs](logs/data-platform-logs.md) and [Metrics](essentials/data-platform-metrics.md). This platform allows data from multiple resources to be analyzed together using a common set of tools in Azure Monitor. Monitoring data may also be sent to other locations to support certain scenarios, and some resources may write to other locations before they can be collected into Logs or Metrics.
+
+Azure Monitor is based on a [common monitoring data platform](data-platform.md) that includes
+- [Metrics](essentials/data-platform-metrics.md)
+- [Logs](logs/data-platform-logs.md)
+- Traces
+- Changes
+
+This platform allows data from multiple resources to be analyzed together using a common set of tools in Azure Monitor. Monitoring data may also be sent to other locations to support certain scenarios, and some resources may write to other locations before they can be collected into Logs or Metrics.
This article describes common sources of monitoring data collected by Azure Monitor in addition to the monitoring data created by Azure resources. Links are provided to detailed information on configuration required to collect this data to different locations. Some of these data sources use the [new data ingestion pipeline](essentials/data-collection.md) in Azure Monitor. This article will be updated as other data sources transition to this new data collection method.

> [!NOTE]
-> Access to data in the Log Analytics Workspaces is governed as outline [here](https://learn.microsoft.com/azure/azure-monitor/logs/manage-access).
+> Access to data in Log Analytics workspaces is governed as outlined in [Manage access to Log Analytics workspaces](logs/manage-access.md).
>

## Application tiers
The [Azure Activity log](essentials/platform-logs-overview.md) includes service
| Destination | Description | Reference |
|:---|:---|:---|
-| Activity log | The Activity log is collected into its own data store that you can view from the Azure Monitor menu or use to create Activity log alerts. | [Query the Activity log in the Azure portal](essentials/activity-log.md#view-the-activity-log) |
+| Activity log | The Activity log is collected into its own data store that you can view from the Azure Monitor menu or use to create Activity log alerts. |[Query the Activity log with the Azure portal](essentials/activity-log.md#view-the-activity-log) |
| Azure Monitor Logs | Configure Azure Monitor Logs to collect the Activity log to analyze it with other monitoring data. | [Collect and analyze Azure activity logs in Log Analytics workspace in Azure Monitor](essentials/activity-log.md) |
| Azure Storage | Export the Activity log to Azure Storage for archiving. | [Archive Activity log](essentials/resource-logs.md#send-to-azure-storage) |
| Event Hubs | Stream the Activity log to other locations using Event Hubs | [Stream Activity log to Event Hubs](essentials/resource-logs.md#send-to-azure-event-hubs) |
Compute resources in Azure, in other clouds, and on-premises have a guest operat
| Azure Monitor Logs | The Log Analytics agent connects to Azure Monitor either directly or through System Center Operations Manager and allows you to collect data from data sources that you configure or from monitoring solutions that provide additional insights into applications running on the virtual machine. | [Agent data sources in Azure Monitor](agents/agent-data-sources.md)<br>[Connect Operations Manager to Azure Monitor](agents/om-agents.md) |

### Azure diagnostic extension
-Enabling the Azure diagnostics extension for Azure Virtual machines allows you to collect logs and metrics from the guest operating system of Azure compute resources including Azure Cloud Service (classic) Web and Worker Roles, Virtual Machines, virtual machine scale sets, and Service Fabric.
+Enabling the Azure diagnostics extension for Azure Virtual machines allows you to collect logs and metrics from the guest operating system of Azure compute resources including Azure Cloud Service (classic) Web and Worker Roles, Virtual Machines, Virtual Machine Scale Sets, and Service Fabric.
| Destination | Description | Reference |
|:---|:---|:---|
Enabling the Azure diagnostics extension for Azure Virtual machines allows you t
## Application Code
-Detailed application monitoring in Azure Monitor is done with [Application Insights](/azure/application-insights/) which collects data from applications running on a variety of platforms. The application can be running in Azure, another cloud, or on-premises.
+Detailed application monitoring in Azure Monitor is done with [Application Insights](/azure/application-insights/), which collects data from applications running on various platforms. The application can be running in Azure, another cloud, or on-premises.
:::image type="content" source="media/data-sources/applications.png" lightbox="media/data-sources/applications.png" alt-text="Diagram that shows application data collection." border="false":::
In addition to the standard tiers of an application, you may need to monitor oth
## Other services
-Other services in Azure write data to the Azure Monitor data platform. This allows you to analyze data collected by these services with data collected by Azure Monitor and leverage the same analysis and visualization tools.
+Other services in Azure write data to the Azure Monitor data platform. This allows you to analyze data collected by these services with data collected by Azure Monitor and apply the same analysis and visualization tools.
| Service | Destination | Description | Reference |
|:---|:---|:---|:---|
-| [Microsoft Defender for Cloud](../security-center/index.yml) | Azure Monitor Logs | Microsoft Defender for Cloud stores the security data it collects in a Log Analytics workspace which allows it to be analyzed with other log data collected by Azure Monitor. | [Data collection in Microsoft Defender for Cloud](../security-center/security-center-enable-data-collection.md) |
-| [Microsoft Sentinel](../sentinel/index.yml) | Azure Monitor Logs | Microsoft Sentinel stores the data it collects from different data sources in a Log Analytics workspace which allows it to be analyzed with other log data collected by Azure Monitor. | [Connect data sources](../sentinel/quickstart-onboard.md) |
+| [Microsoft Defender for Cloud](../security-center/index.yml) | Azure Monitor Logs | Microsoft Defender for Cloud stores the security data it collects in a Log Analytics workspace, which allows it to be analyzed with other log data collected by Azure Monitor. | [Data collection in Microsoft Defender for Cloud](../security-center/security-center-enable-data-collection.md) |
+| [Microsoft Sentinel](../sentinel/index.yml) | Azure Monitor Logs | Microsoft Sentinel stores the data it collects from different data sources in a Log Analytics workspace, which allows it to be analyzed with other log data collected by Azure Monitor. | [Connect data sources](../sentinel/quickstart-onboard.md) |
## Next steps
azure-monitor Monitor Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/monitor-azure-resource.md
-# Tutorial: Monitor Azure resources with Azure Monitor
+# Monitor Azure resources with Azure Monitor
When you have critical applications and business processes that rely on Azure resources, you want to monitor those resources for their availability, performance, and operation. Azure Monitor is a full-stack monitoring service that provides a complete set of features to monitor your Azure resources. You can also use Azure Monitor to monitor resources in other clouds and on-premises.
-In this tutorial, you learn about:
+In this article, you learn about:
> [!div class="checklist"] > * Azure Monitor and how it's integrated into the portal for other Azure services.
In this tutorial, you learn about:
> * Azure Monitor tools that are used to collect and analyze data. > [!NOTE]
-> This tutorial describes Azure Monitor concepts and walks you through different menu items. To jump right into using Azure Monitor features, start with [Tutorial: Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md).
+> This article describes Azure Monitor concepts and walks you through different menu items. To jump right into using Azure Monitor features, start with [Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md).
## Monitoring data
You can access Azure Monitor features from the **Monitor** menu in the Azure por
The **Overview** page includes details about the resource and often its current state. For example, a virtual machine shows its current running state. Many Azure services have a **Monitoring** tab that includes charts for a set of key metrics. Charts are a quick way to view the operation of the resource. You can select any of the charts to open them in Metrics Explorer for more detailed analysis.
-For a tutorial on using Metrics Explorer, see [Tutorial: Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md).
+To learn how to use Metrics Explorer, see [Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md).
![Screenshot that shows the Overview page.](media/monitor-azure-resource/overview-page.png)
The **Activity log** menu item lets you view entries in the [activity log](../es
The **Alerts** page shows you any recent alerts that were fired for the resource. Alerts proactively notify you when important conditions are found in your monitoring data and can use data from either Metrics or Logs.
-For tutorials on how to create alert rules and view alerts, see [Tutorial: Create a metric alert for an Azure resource](../alerts/tutorial-metric-alert.md) or [Tutorial: Create a log query alert for an Azure resource](../alerts/tutorial-log-alert.md).
+To learn how to create alert rules and view alerts, see [Create a metric alert for an Azure resource](../alerts/tutorial-metric-alert.md) or [Create a log query alert for an Azure resource](../alerts/tutorial-log-alert.md).
:::image type="content" source="media/monitor-azure-resource/alerts-view.png" lightbox="media/monitor-azure-resource/alerts-view.png" alt-text="Screenshot that shows the Alerts page.":::
For tutorials on how to create alert rules and view alerts, see [Tutorial: Creat
The **Metrics** menu item opens [Metrics Explorer](./metrics-getting-started.md). You can use it to work with individual metrics or combine multiple metrics to identify correlations and trends. This is the same Metrics Explorer that opens when you select one of the charts on the **Overview** page.
-For a tutorial on how to use Metrics Explorer, see [Tutorial: Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md).
+To learn how to use Metrics Explorer, see [Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md).
:::image type="content" source="media/monitor-azure-resource/metrics.png" lightbox="media/monitor-azure-resource/metrics.png" alt-text="Screenshot that shows Metrics Explorer.":::
For a tutorial on how to use Metrics Explorer, see [Tutorial: Analyze metrics fo
The **Diagnostic settings** page lets you create a [diagnostic setting](../essentials/diagnostic-settings.md) to collect the resource logs for your resource. You can send them to multiple locations, but the most common use is to send them to a Log Analytics workspace so you can analyze them with Log Analytics.
-For a tutorial on how to create a diagnostic setting, see [Tutorial: Collect and analyze resource logs from an Azure resource](../essentials/tutorial-resource-logs.md).
+To learn how to create a diagnostic setting, see [Collect and analyze resource logs from an Azure resource](../essentials/tutorial-resource-logs.md).
:::image type="content" source="media/monitor-azure-resource/diagnostic-settings.png" lightbox="media/monitor-azure-resource/diagnostic-settings.png" alt-text="Screenshot that shows the Diagnostic settings page.":::
For a tutorial on how to create a diagnostic setting, see [Tutorial: Collect and
The **Insights** menu item opens the insight for the resource if the Azure service has one. [Insights](../monitor-reference.md) provide a customized monitoring experience built on the Azure Monitor data platform and standard features.
-For a list of insights that are available and links to their documentation, see [Insights and core solutions](../monitor-reference.md#insights-and-curated-visualizations).
+For a list of insights that are available and links to their documentation, see [Insights](../insights/insights-overview.md) and [core solutions](../insights/solutions.md).
:::image type="content" source="media/monitor-azure-resource/insights.png" lightbox="media/monitor-azure-resource/insights.png" alt-text="Screenshot that shows the Insights page.":::
azure-monitor Prometheus Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-grafana.md
Versions 9.x and greater of Grafana support Azure Authentication, but it's not e
## Next steps -- [Collect Prometheus metrics for your AKS cluster](../containers/container-insights-prometheus-metrics-addon.md).
+- [Collect Prometheus metrics for your AKS cluster](../essentials/prometheus-metrics-enable.md).
- [Configure Prometheus alerting and recording rules groups](prometheus-rule-groups.md).-- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
+- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
azure-monitor Prometheus Metrics Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-enable.md
+
+ Title: Enable Azure Monitor managed service for Prometheus (preview)
+description: Enable Azure Monitor managed service for Prometheus (preview) and configure data collection from your Azure Kubernetes Service (AKS) cluster.
++ Last updated : 09/28/2022+++
+# Collect Prometheus metrics from AKS cluster (preview)
+This article describes how to configure your Azure Kubernetes Service (AKS) cluster to send data to Azure Monitor managed service for Prometheus. When you do, a containerized version of the [Azure Monitor agent](../agents/agents-overview.md) is installed with a metrics extension. You just need to specify the Azure Monitor workspace that the data should be sent to.
+
+> [!NOTE]
+> The process described here doesn't enable [Container insights](../containers/container-insights-overview.md) on the cluster even though the Azure Monitor agent installed in this process is the same one used by Container insights. See [Enable Container insights](../containers/container-insights-onboard.md) for different methods to enable Container insights on your cluster. See [Collect Prometheus metrics with Container insights](../containers/container-insights-prometheus.md) for details on adding Prometheus collection to a cluster that already has Container insights enabled.
+
+## Prerequisites
+
+- You must either have an [Azure Monitor workspace](azure-monitor-workspace-overview.md) or [create a new one](azure-monitor-workspace-overview.md#create-an-azure-monitor-workspace).
+- The cluster must use [managed identity authentication](../containers/container-insights-enable-aks.md#migrate-to-managed-identity-authentication).
+- The following resource providers must be registered in the subscription of the AKS cluster and the Azure Monitor workspace (a CLI sketch for checking and registering them follows this list).
+ - Microsoft.ContainerService
+ - Microsoft.Insights
+ - Microsoft.AlertsManagement
+
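As a rough sketch (not part of the original procedure), the required resource providers can be checked and registered with standard Azure CLI commands; run them against each subscription involved:

```azurecli
# Check the registration state of each required provider
az provider show --namespace Microsoft.ContainerService --query registrationState --output tsv
az provider show --namespace Microsoft.Insights --query registrationState --output tsv
az provider show --namespace Microsoft.AlertsManagement --query registrationState --output tsv

# Register any provider that isn't already registered
az provider register --namespace Microsoft.ContainerService
az provider register --namespace Microsoft.Insights
az provider register --namespace Microsoft.AlertsManagement
```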
+## Enable Prometheus metric collection
+Use any of the following methods to install the Azure Monitor agent on your AKS cluster and send Prometheus metrics to your Azure Monitor workspace.
+
+### [Azure portal](#tab/azure-portal)
+
+1. Open the **Azure Monitor workspaces** menu in the Azure portal and select your cluster.
+2. Select **Managed Prometheus** to display a list of AKS clusters.
+3. Click **Configure** next to the cluster you want to enable.
+
+ :::image type="content" source="media/prometheus-metrics-enable/azure-monitor-workspace-configure-prometheus.png" lightbox="media/prometheus-metrics-enable/azure-monitor-workspace-configure-prometheus.png" alt-text="Screenshot of Azure Monitor workspace with Prometheus configuration.":::
++
+### [CLI](#tab/cli)
+
+#### Prerequisites
+
+- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`. A sketch for checking the registration state follows this list.
+- The aks-preview extension needs to be installed using the command `az extension add --name aks-preview`. For more information on how to install a CLI extension, see [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
+- Azure CLI version 2.41.0 or higher is required for this feature.
+
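A minimal sketch for confirming the feature registration before you continue; `az feature show` and `az provider register` are standard Azure CLI commands, and the flag name comes from the prerequisite above:

```azurecli
# Check whether the preview feature has finished registering
az feature show --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview --query properties.state --output tsv

# After the state shows "Registered", refresh the resource provider so AKS picks up the feature
az provider register --namespace Microsoft.ContainerService
```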
+#### Install metrics addon
+
+Use `az aks update` with the `--enable-azuremonitormetrics` option to install the metrics addon. The following options depend on the Azure Monitor workspace and Grafana workspace you want to use.
++
+**Create a new default Azure Monitor workspace.**<br>
+If no Azure Monitor workspace is specified, a default Azure Monitor workspace named `DefaultAzureMonitorWorkspace-<mapped_region>` will be created in the `DefaultRG-<cluster_region>` resource group.
+This Azure Monitor workspace will be in the region specified in [Region mappings](#region-mappings).
+
+```azurecli
+az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group>
+```
+
+**Use an existing Azure Monitor workspace.**<br>
+If the Azure Monitor workspace is linked to one or more Grafana workspaces, then the data will be available in Grafana.
+
+```azurecli
+az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <workspace-name-resource-id>
+```
+
+**Use an existing Azure Monitor workspace and link with an existing Grafana workspace.**<br>
+This creates a link between the Azure Monitor workspace and the Grafana workspace.
+
+```azurecli
+az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <azure-monitor-workspace-name-resource-id> --grafana-resource-id <grafana-workspace-name-resource-id>
+```
+
+The output for each command will look similar to the following:
+
+```json
+"azureMonitorProfile": {
+ "metrics": {
+ "enabled": true,
+ "kubeStateMetrics": {
+ "metrican'tationsAllowList": "",
+ "metricLabelsAllowlist": ""
+ }
+ }
+}
+```
+
+#### Optional parameters
+Following are optional parameters that you can use with the previous commands.
+
+- `--ksm-metric-annotations-allow-list` is a comma-separated list of Kubernetes annotation keys that will be used in the resource's labels metric. By default, the metric contains only name and namespace labels. To include additional annotations, provide a list of resource names in their plural form and the Kubernetes annotation keys that you would like to allow for them. A single `*` can be provided per resource to allow any annotations, but this has severe performance implications.
+- `--ksm-metric-labels-allow-list` is a comma-separated list of additional Kubernetes label keys that will be used in the resource's labels metric. By default, the metric contains only name and namespace labels. To include additional labels, provide a list of resource names in their plural form and the Kubernetes label keys that you would like to allow for them. A single `*` can be provided per resource to allow any labels, but this has severe performance implications.
+
+**Use annotations and labels.**
+
+```azurecli
+az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --ksm-metric-labels-allow-list "namespaces=[k8s-label-1,k8s-label-n]" --ksm-metric-annotations-allow-list "pods=[k8s-annotation-1,k8s-annotation-n]"
+```
+
+The output will be similar to the following:
+
+```json
+ "azureMonitorProfile": {
+ "metrics": {
+ "enabled": true,
+ "kubeStateMetrics": {
+ "metrican'tationsAllowList": "pods=[k8s-annotation-1,k8s-annotation-n]",
+ "metricLabelsAllowlist": "namespaces=[k8s-label-1,k8s-label-n]"
+ }
+ }
+ }
+```
+
+## [Resource Manager](#tab/resource-manager)
+
+### Prerequisites
+
+- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.
+- The Azure Monitor workspace and Azure Managed Grafana workspace must already be created.
+- The template needs to be deployed in the same resource group as the Azure Managed Grafana workspace.
++
+### Retrieve required values for Grafana resource
+From the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
+
+ Copy the value of the `principalId` field for the `SystemAssigned` identity.
+
+```json
+"identity": {
+ "principalId": "00000000-0000-0000-0000-000000000000",
+ "tenantId": "00000000-0000-0000-0000-000000000000",
+ "type": "SystemAssigned"
+ },
+```
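If you prefer the CLI over the portal JSON view, the following is a hedged sketch that assumes the Azure Managed Grafana CLI extension (`amg`) is installed; the instance and resource group names are placeholders:

```azurecli
# Assumes the Azure Managed Grafana extension is installed: az extension add --name amg
az grafana show --name <grafana-instance-name> --resource-group <resource-group> --query identity.principalId --output tsv
```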
+
+If you're using an existing Azure Managed Grafana instance that has already been linked to an Azure Monitor workspace, then you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, the instance hasn't been linked with any Azure Monitor workspace.
+
+```json
+"properties": {
+ "grafanaIntegrations": {
+ "azureMonitorWorkspaceIntegrations": [
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_1"
+ },
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_2"
+ }
+ ]
+ }
+}
+```
+
+### Assign role to system identity
+The Azure Managed Grafana resource requires the `Monitoring Data Reader` role to read data from the Azure Monitor workspace. A CLI sketch of the same assignment follows these steps.
+
+1. From the **Access control (IAM)** page for the Azure Monitor workspace in the Azure portal, select **Add** and then **Add role assignment**.
+2. Select `Monitoring Data Reader`.
+3. Select **Managed identity** and then **Select members**.
+4. Select the **system-assigned managed identity** with the `principalId` from the Grafana resource.
+5. Click **Select** and then **Review+assign**.
+
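As an alternative to the portal steps, here's a hedged CLI sketch of the same role assignment; the principal ID and workspace resource ID are placeholders taken from the earlier steps:

```azurecli
# Grant the Grafana system-assigned identity read access to the Azure Monitor workspace data
az role assignment create \
  --assignee "<principalId-from-grafana-resource>" \
  --role "Monitoring Data Reader" \
  --scope "<azure-monitor-workspace-resource-id>"
```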
+### Download and edit template and parameter file
+
+1. Download the template at [https://aka.ms/azureprometheus-enable-arm-template](https://aka.ms/azureprometheus-enable-arm-template) and save it as **existingClusterOnboarding.json**.
+2. Download the parameter file at [https://aka.ms/azureprometheus-enable-arm-template-parameters](https://aka.ms/azureprometheus-enable-arm-template-parameters) and save it as **existingClusterParam.json**.
+3. Edit the values in the parameter file.
+
+ | Parameter | Value |
+ |:|:|
+ | `azureMonitorWorkspaceResourceId` | Resource ID for the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
+ | `azureMonitorWorkspaceLocation` | Location of the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
+ | `clusterResourceId` | Resource ID for the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
+ | `clusterLocation` | Location of the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
+ | `metricLabelsAllowlist` | Comma-separated list of Kubernetes label keys that will be used in the resource's labels metric. |
+ | `metricAnnotationsAllowList` | Comma-separated list of Kubernetes annotation keys that will be used in the resource's labels metric. |
+ | `grafanaResourceId` | Resource ID for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. |
+ | `grafanaLocation` | Location for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. |
+ | `grafanaSku` | SKU for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. Use the **sku.name**. |
++
+4. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. This will be similar to the following:
+
+ ```json
+ {
+ "type": "Microsoft.Dashboard/grafana",
+ "apiVersion": "2022-08-01",
+ "name": "[split(parameters('grafanaResourceId'),'/')[8]]",
+ "sku": {
+ "name": "[parameters('grafanaSku')]"
+ },
+ "location": "[parameters('grafanaLocation')]",
+ "properties": {
+ "grafanaIntegrations": {
+ "azureMonitorWorkspaceIntegrations": [
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_1"
+ },
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_2"
+ },
+ {
+ "azureMonitorWorkspaceResourceId": "[parameters('azureMonitorWorkspaceResourceId')]"
+ }
+ ]
+ }
+ }
+ ```
+
+In this JSON, `full_resource_id_1` and `full_resource_id_2` were already in the Azure Managed Grafana resource JSON, and they're added here to the ARM template. If you have no existing Grafana integrations, don't include the entries for `full_resource_id_1` and `full_resource_id_2`.
+
+The final `azureMonitorWorkspaceResourceId` entry is already in the template and is used to link to the Azure Monitor Workspace resource ID provided in the parameters file.
++
+### Deploy template
+
+Deploy the template with the parameter file using any valid method for deploying Resource Manager templates. See [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates) for examples of different methods.
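For example, one common method is an Azure CLI resource group deployment; this is a sketch only, using the file names from the previous step and a placeholder resource group:

```azurecli
# Deploy the edited template and parameter file to the resource group that contains the Grafana workspace
az deployment group create \
  --resource-group <grafana-resource-group> \
  --template-file existingClusterOnboarding.json \
  --parameters @existingClusterParam.json
```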
++++++
+## Verify Deployment
+
+Run the following command to verify that the daemon set was deployed properly:
+
+```
+kubectl get ds ama-metrics-node --namespace=kube-system
+```
+
+The output should resemble the following:
+
+```
+User@aksuser:~$ kubectl get ds ama-metrics-node --namespace=kube-system
+NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
+ama-metrics-node 1 1 1 1 1 <none> 10h
+```
+
+Run the following command to verify that the replica set was deployed properly:
+
+```
+kubectl get rs --namespace=kube-system
+```
+
+The output should resemble the following:
+
+```
+User@aksuser:~$kubectl get rs --namespace=kube-system
+NAME DESIRED CURRENT READY AGE
+ama-metrics-5c974985b8 1 1 1 11h
+ama-metrics-ksm-5fcf8dffcd 1 1 1 11h
+```
++
+## Limitations
+
+- Ensure that you update the `kube-state metrics` annotations and labels list with proper formatting. There's a limitation in Resource Manager template deployments that requires exact values in the `kube-state` metrics pods. If the Kubernetes pod has any issues with malformed parameters and isn't running, the feature won't work as expected.
+- A data collection rule and data collection endpoint are created with the name `MSPROM-\<cluster-name\>-\<cluster-region\>`. These names can't currently be modified.
+- You must get the existing Azure Monitor workspace integrations for a Grafana workspace and update the Resource Manager template with them; otherwise, the deployment will overwrite and remove the existing integrations from the Grafana workspace.
+- CPU and memory requests and limits can't be changed for the Container insights metrics addon. If changed, they'll be reconciled and replaced by the original values in a few seconds.
+- The metrics addon doesn't work on AKS clusters configured with an HTTP proxy.
++
+## Uninstall metrics addon
+Currently, Azure CLI is the only option to remove the metrics addon and stop sending Prometheus metrics to Azure Monitor managed service for Prometheus.
+
+If you don't already have it, install the `aks-preview` extension with the following command. For more information on how to install a CLI extension, see [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
+
+
+```azurecli
+az extension add --name aks-preview
+```
+Use the following command to remove the agent from the cluster nodes and delete the recording rules created for the data being collected from the cluster. This doesn't remove the DCE, DCR, or the data already collected and stored in your Azure Monitor workspace.
+
+```azurecli
+az aks update --disable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group>
+```
+
+## Region mappings
+When you let the metrics addon create a default Azure Monitor workspace, it's created in the region listed in the following table, based on the region of your AKS cluster.
+
+| AKS Cluster region | Azure Monitor workspace region |
+|--||
+|australiacentral |eastus|
+|australiacentral2 |eastus|
+|australiaeast |eastus|
+|australiasoutheast |eastus|
+|brazilsouth |eastus|
+|canadacentral |eastus|
+|canadaeast |eastus|
+|centralus |centralus|
+|centralindia |centralindia|
+|eastasia |westeurope|
+|eastus |eastus|
+|eastus2 |eastus2|
+|francecentral |westeurope|
+|francesouth |westeurope|
+|japaneast |eastus|
+|japanwest |eastus|
+|koreacentral |westeurope|
+|koreasouth |westeurope|
+|northcentralus |eastus|
+|northeurope |westeurope|
+|southafricanorth |westeurope|
+|southafricawest |westeurope|
+|southcentralus |eastus|
+|southeastasia |westeurope|
+|southindia |centralindia|
+|uksouth |westeurope|
+|ukwest |westeurope|
+|westcentralus |eastus|
+|westeurope |westeurope|
+|westindia |centralindia|
+|westus |westus|
+|westus2 |westus2|
+|westus3 |westus|
+|norwayeast |westeurope|
+|norwaywest |westeurope|
+|switzerlandnorth |westeurope|
+|switzerlandwest |westeurope|
+|uaenorth |westeurope|
+|germanywestcentral |westeurope|
+|germanynorth |westeurope|
+|uaecentral |westeurope|
+|eastus2euap |eastus2euap|
+|centraluseuap |westeurope|
+|brazilsoutheast |eastus|
+|jioindiacentral |centralindia|
+|swedencentral |westeurope|
+|swedensouth |westeurope|
+|qatarcentral |westeurope|
+
+## Next steps
+
+- [See the default configuration for Prometheus metrics](prometheus-metrics-scrape-default.md).
+- [Customize Prometheus metric scraping for the cluster](prometheus-metrics-scrape-configuration.md).
azure-monitor Prometheus Metrics Multiple Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-multiple-workspaces.md
Routing metrics to more Azure Monitor Workspaces can be done through the creatio
## Send same metrics to multiple Azure Monitor workspaces
-You can create multiple Data Collection Rules that point to the same Data Collection Endpoint for metrics to be sent to additional Azure Monitor Workspaces from the same Kubernetes cluster. Currently, this is only available through onboarding through Resource Manager templates. You can follow the [regular onboarding process](../containers/container-insights-prometheus-metrics-addon.md#enable-prometheus-metric-collection) and then edit the same Resource Manager templates to add additional DCRs for your additional Azure Monitor Workspaces. You'll need to edit the template to add an additional parameters for every additional Azure Monitor workspace, add another DCR for every additional Azure Monitor workspace, and add an additional Azure Monitor workspace integration for Grafana.
+You can create multiple Data Collection Rules that point to the same Data Collection Endpoint for metrics to be sent to additional Azure Monitor workspaces from the same Kubernetes cluster. Currently, this is only available through onboarding with Resource Manager templates. You can follow the [regular onboarding process](prometheus-metrics-enable.md) and then edit the same Resource Manager templates to add additional DCRs for your additional Azure Monitor workspaces. You'll need to edit the template to add additional parameters for every additional Azure Monitor workspace, add another DCR for every additional Azure Monitor workspace, and add an additional Azure Monitor workspace integration for Grafana.
- Add the following parameters: ```json
scrape_configs:
## Next steps - [Learn more about Azure Monitor managed service for Prometheus](prometheus-metrics-overview.md).-- [Collect Prometheus metrics from AKS cluster](../containers/container-insights-prometheus-metrics-addon.md).
+- [Collect Prometheus metrics from AKS cluster](prometheus-metrics-enable.md).
azure-monitor Prometheus Metrics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-overview.md
Azure Monitor managed service for Prometheus is a component of [Azure Monitor Me
## Data sources Azure Monitor managed service for Prometheus can currently collect data from any of the following data sources. -- Azure Kubernetes service (AKS). [Configure the Azure Monitor managed service for Prometheus AKS add-on](../containers/container-insights-prometheus-metrics-addon.md) to scrape metrics from an AKS cluster.-- Any Kubernetes cluster running self-managed Prometheus using [remote-write](https://aka.ms/azureprometheus-promio-prw). In this configuration, metrics are collected by a local Prometheus server for each cluster and then consolidated in Azure Monitor managed service for Prometheus.
+- Azure Kubernetes Service (AKS)
+- Any Kubernetes cluster running self-managed Prometheus using [remote-write](https://aka.ms/azureprometheus-promio-prw).
+## Enable
+The only requirement to enable Azure Monitor managed service for Prometheus is to create an [Azure Monitor workspace](azure-monitor-workspace-overview.md), which is where Prometheus metrics are stored (a CLI sketch for creating one follows the list below). Once this workspace is created, you can onboard services that collect Prometheus metrics.
+
+- To collect Prometheus metrics from your AKS cluster without using Container insights, see [Collect Prometheus metrics from AKS cluster (preview)](prometheus-metrics-enable.md).
+- To add collection of Prometheus metrics to your cluster using Container insights, see [Collect Prometheus metrics with Container insights](../containers/container-insights-prometheus.md#send-data-to-azure-monitor-managed-service-for-prometheus).
+- To configure remote-write to collect data from your self-managed Prometheus server, see [Azure Monitor managed service for Prometheus remote write - managed identity (preview)](prometheus-remote-write-managed-identity.md).
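As a hedged sketch of creating the Azure Monitor workspace mentioned above (an assumption: your Azure CLI version includes the `az monitor account` command group for Azure Monitor workspaces), the workspace can also be created from the command line:

```azurecli
# Create an Azure Monitor workspace to store Prometheus metrics (names and region are placeholders)
az monitor account create --name <azure-monitor-workspace-name> --resource-group <resource-group> --location <region>
```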
## Grafana integration The primary method for visualizing Prometheus metrics is [Azure Managed Grafana](../../managed-grafan#link-a-grafana-workspace) so that it can be used as a data source in a Grafana dashboard. You then have access to multiple prebuilt dashboards that use Prometheus metrics and the ability to create any number of custom dashboards.
The primary method for visualizing Prometheus metrics is [Azure Managed Grafana]
## Alerts Azure Monitor managed service for Prometheus adds a new Prometheus alert type for creating alerts using PromQL queries. You can view fired and resolved Prometheus alerts in the Azure portal along with other alert types. Prometheus alerts are configured with the same [alert rules](https://aka.ms/azureprometheus-promio-alertrules) used by Prometheus. For your AKS cluster, you can use a [set of predefined Prometheus alert rules]
-## Enable
-The only requirement to enable Azure Monitor managed service for Prometheus is to create an [Azure Monitor workspace](azure-monitor-workspace-overview.md), which is where Prometheus metrics are stored. Once this workspace is created, you can onboard services that collect Prometheus metrics such as Container insights for your AKS cluster as described in [Send Kubernetes metrics to Azure Monitor managed service for Prometheus with Container insights](../containers/container-insights-prometheus-metrics-addon.md).
- ## Limitations See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for performance related service limits for Azure Monitor managed service for Prometheus.
Following are links to Prometheus documentation.
## Next steps
+- [Enable Azure Monitor managed service for Prometheus](prometheus-metrics-enable.md).
- [Collect Prometheus metrics for your AKS cluster](../containers/container-insights-prometheus-metrics-addon.md). - [Configure Prometheus alerting and recording rules groups](prometheus-rule-groups.md). - [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-configuration.md
# Customize scraping of Prometheus metrics in Azure Monitor (preview)
-This article provides instructions on customizing metrics scraping for a Kubernetes cluster with the [metrics addon](../containers/container-insights-prometheus-metrics-addon.md) in Azure Monitor.
+This article provides instructions on customizing metrics scraping for a Kubernetes cluster with the [metrics addon](prometheus-metrics-enable.md) in Azure Monitor.
## Configmaps
azure-monitor Prometheus Metrics Scrape Default https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-default.md
# Default Prometheus metrics configuration in Azure Monitor (preview)
-This article lists the default targets, dashboards, and recording rules when you [configure Container insights to collect Prometheus metrics by enabling metrics-addon](../containers/container-insights-prometheus-metrics-addon.md) for any AKS cluster.
+This article lists the default targets, dashboards, and recording rules when you [configure collection of Prometheus metrics](prometheus-metrics-enable.md) for any AKS cluster.
## Scrape frequency
azure-monitor Tutorial Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/tutorial-metrics.md
Title: Tutorial - Analyze metrics for an Azure resource
+ Title: Analyze metrics for an Azure resource
description: Learn how to analyze metrics for an Azure resource using metrics explorer in Azure Monitor
Last updated 11/08/2021
-# Tutorial: Analyze metrics for an Azure resource
+# Analyze metrics for an Azure resource
Metrics are numerical values that are automatically collected at regular intervals and describe some aspect of a resource. For example, a metric might tell you the processor utilization of a virtual machine, the free space in a storage account, or the incoming traffic for a virtual network. Metrics explorer is a feature of Azure Monitor in the Azure portal that allows you to create charts from metric values, visually correlate trends, and investigate spikes and dips in metric values. Use the metrics explorer to plot charts from metrics created by your Azure resources and investigate their health and utilization. In this tutorial, you learn how to:
In this tutorial, you learn how to:
> * Modify the time range and granularity for the chart
-Following is a video that shows a more extensive scenario than the procedure outlined in this article. If you are new to metrics, we suggest you read through this article first and then view the video to see more specifics.
+Following is a video that shows a more extensive scenario than the procedure outlined in this tutorial. If you are new to metrics, we suggest you read through this article first and then view the video to see more specifics.
> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE4qO59] ## Prerequisites
-To complete this tutorial you need the following:
+To complete the steps in this tutorial, you need the following:
- An Azure resource to monitor. You can use any resource in your Azure subscription that supports metrics. To determine whether a resource supports metrics, go to its menu in the Azure portal and verify that there's a **Metrics** option in the **Monitoring** section of the menu.
azure-monitor Tutorial Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/tutorial-resource-logs.md
Title: Tutorial - Collect resource logs from an Azure resource
-description: Tutorial to configure diagnostic settings to send resource logs from an Azure resource io a Log Analytics workspace where they can be analyzed with a log query.
-
+ Title: Collect resource logs from an Azure resource
+description: Learn how to configure diagnostic settings to send resource logs from an Azure resource to a Log Analytics workspace where they can be analyzed with a log query.
+ Last updated 11/08/2021
-# Tutorial: Collect and analyze resource logs from an Azure resource
+# Collect and analyze resource logs from an Azure resource
Resource logs provide insight into the detailed operation of an Azure resource and are useful for monitoring their health and availability. Azure resources generate resource logs automatically, but you must create a diagnostic setting to collect them. This tutorial takes you through the process of creating a diagnostic setting to send resource logs to a Log Analytics workspace where you can analyze them with log queries. In this tutorial, you learn how to:
In this tutorial, you learn how to:
## Prerequisites
-To complete this tutorial you need the following:
+To complete the steps in this tutorial, you need the following:
- An Azure resource to monitor. You can use any resource in your Azure subscription that supports diagnostic settings. To determine whether a resource supports diagnostic settings, go to its menu in the Azure portal and verify that there's a **Diagnostic settings** option in the **Monitoring** section of the menu.
azure-monitor Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/insights-overview.md
+
+ Title: Azure Monitor Insights Overview
+description: Lists available Azure Monitor "Insights" and other Azure product integrations
+++ Last updated : 10/15/2022++
+# Azure Monitor Insights overview
+
+Some services have a curated monitoring experience. That is, Microsoft provides customized functionality meant to act as a starting point for monitoring those services. These experiences are collectively known as *curated visualizations*, with the larger, more complex of them being called *Insights*.
+
+The experiences collect and analyze a subset of the available telemetry for a given service or set of services. Depending on the service, the experiences might also provide out-of-the-box alerting. They present the telemetry in a visual layout. The visualizations vary in size and scale.
+
+Some visualizations are considered part of Azure Monitor and follow the support and service level agreements for Azure. They're supported in all Azure regions where Azure Monitor is available. Other curated visualizations provide less functionality, might not scale, and might have different agreements. Some might be based solely on Azure Monitor Workbooks, while others might have an extensive custom experience.
+
+## Insights and curated visualizations
+
+The following table lists the available curated visualizations and information about them. Most of the visualizations listed below can be found in the [Insights hub in the Azure portal](https://ms.portal.azure.com/#view/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/~/more). The table uses the same grouping as the portal.
+
+>[!NOTE]
+> Another type of older visualization called *monitoring solutions* is no longer in active development. The replacement technology is Azure Monitor Insights, as described in this article. We suggest that you use Insights and not deploy new instances of solutions. For more information on the solutions, see [Monitoring solutions in Azure Monitor](solutions.md).
+
+|Name with docs link| State | [Azure portal link](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/more)| Description |
+|:--|:--|:--|:--|
+|Compute||||
+ | [Azure VM Insights](/azure/azure-monitor/insights/vminsights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/virtualMachines) | Monitors your Azure VMs and Virtual Machine Scale Sets at scale. It analyzes the performance and health of your Windows and Linux VMs and monitors their processes and dependencies on other resources and external processes. |
+| [Azure Container Insights](/azure/azure-monitor/insights/container-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/containerInsights) | Monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service. It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. |
+|Networking||||
+ | [Azure Network Insights](../../network-watcher/network-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/networkInsights) | Provides a comprehensive view of health and metrics for all your network resources. The advanced search capability helps you identify resource dependencies, enabling scenarios like identifying resources that are hosting your website, by searching for your website name. |
+|Storage||||
+ | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/storageInsights) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. |
+| [Azure Backup](../../backup/backup-azure-monitoring-use-azuremonitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides built-in monitoring and alerting capabilities in a Recovery Services vault. |
+|Databases||||
+| [Azure Cosmos DB Insights](../../cosmos-db/cosmosdb-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/cosmosDBInsights) | Provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. |
+| [Azure Monitor for Azure Cache for Redis (preview)](../../azure-cache-for-redis/redis-cache-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/redisCacheInsights) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health. |
+| [Azure Data Explorer Insights](/azure/data-explorer/data-explorer-insights) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/adxClusterInsights) | Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures. |
+ | [Azure Monitor Log Analytics Workspace](../logs/log-analytics-workspace-insights-overview.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/lawsInsights) | Log Analytics Workspace Insights (preview) provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article will help you understand how to onboard and use Log Analytics Workspace Insights (preview). |
+|Security||||
+ | [Azure Key Vault Insights (preview)](../../key-vault/key-vault-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/keyvaultsInsights) | Provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. |
+|Monitor||||
+ | [Azure Monitor Application Insights](../app/app-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/applicationsInsights) | Extensible application performance management service that monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It uses the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. It enables you to diagnose errors without waiting for a user to report them. Application Insights includes connection points to various development tools and integrates with Visual Studio to support your DevOps processes. |
+| [Azure Activity Log Insights](../essentials/activity-log-insights.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides a set of dashboards that monitor the changes to resources in your subscriptions and resource groups, based on activity log data. |
+ | [Azure Monitor for Resource Groups](resource-group-insights.md) | GA | No | Triage and diagnose any problems your individual resources encounter, while offering context for the health and performance of the resource group as a whole. |
+|Integration||||
+ | [Azure Service Bus Insights](../../service-bus-messaging/service-bus-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/serviceBusInsights) | Azure Service Bus Insights provide a view of the overall performance, failures, capacity, and operational health of all your Service Bus resources in a unified interactive experience. |
+ [Azure IoT Edge](../../iot-edge/how-to-explore-curated-visualizations.md) | GA | No | Visualize and explore metrics collected from the IoT Edge device right in the Azure portal by using Azure Monitor Workbooks-based public templates. The curated workbooks use built-in metrics from the IoT Edge runtime. These views don't need any metrics instrumentation from the workload modules. |
+|Workloads||||
+| [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL Insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you're just setting up SQL monitoring, use SQL Insights instead of the SQL Analytics solution. |
+| [Azure Monitor for SAP solutions](../../virtual-machines/workloads/sap/monitor-sap-on-azure.md) | Preview | No | An Azure-native monitoring product for anyone running their SAP landscapes on Azure. It works with both SAP on Azure Virtual Machines and SAP on Azure Large Instances. Collects telemetry data from Azure infrastructure and databases in one central location and visually correlates the data for faster troubleshooting. You can monitor different components of an SAP landscape, such as Azure virtual machines (VMs), high-availability clusters, SAP HANA database, and SAP NetWeaver, by adding the corresponding provider for that component. |
+|Other||||
+ | [Azure Virtual Desktop Insights](../../virtual-desktop/azure-monitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_WVD/WvdManagerMenuBlade/insights/menuId/insights) | Azure Virtual Desktop Insights is a dashboard built on Azure Monitor Workbooks that helps IT professionals understand their Azure Virtual Desktop environments. |
+ | [Azure Stack HCI Insights](/azure-stack/hci/manage/azure-stack-hci-insights) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/azureStackHCIInsights) | Based on Azure Monitor Workbooks. Provides health, performance, and usage insights about registered Azure Stack HCI version 21H2 clusters that are connected to Azure and enrolled in monitoring. It stores its data in a Log Analytics workspace, which allows it to deliver powerful aggregation and filtering and analyze data trends over time. |
+|Not in Azure portal Insight hub||||
+| [Azure Monitor Workbooks for Azure Active Directory](../../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) | General availability (GA) | [Yes](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Workbooks) | Azure Active Directory provides workbooks to understand the effect of your Conditional Access policies, troubleshoot sign-in failures, and identify legacy authentications. |
+| [Azure HDInsight](../../hdinsight/log-analytics-migration.md#insights) | Preview | No | An Azure Monitor workbook that collects important performance metrics from your HDInsight cluster and provides the visualizations and dashboards for most common scenarios. Gives a complete view of a single HDInsight cluster including resource utilization and application status.|
+
+## Next steps
+
+- Review some of the insights listed above to learn about their functionality
+- Understand [what Azure Monitor can monitor](../monitor-reference.md)
azure-monitor Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/solutions.md
# Monitoring solutions in Azure Monitor > [!CAUTION]
-> Many monitoring solutions are no longer in active development. We suggest you check each solution to see if it has a replacement. We suggest you not deploy new instances of solutions that have other options, even if those solutions are still available. Many have been replaced by a [newer curated visualization or insight](../monitor-reference.md#insights-and-curated-visualizations).
+> Many monitoring solutions are no longer in active development. We suggest you check each solution to see if it has a replacement. We suggest you not deploy new instances of solutions that have other options, even if those solutions are still available. Many have been replaced by a [newer curated visualization or insight](insights-overview.md).
Monitoring solutions in Azure Monitor provide analysis of the operation of an Azure application or service. This article gives a brief overview of monitoring solutions in Azure and details on using and installing them.
azure-monitor Azure Data Explorer Monitor Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-data-explorer-monitor-proxy.md
- Title: Query data in Azure Monitor using Azure Data Explorer
-description: Use Azure Data Explorer to perform cross product queries between Azure Data Explorer, Log Analytics workspaces and classic Application Insights applications in Azure Monitor.
--- Previously updated : 03/28/2022----
-# Query data in Azure Monitor using Azure Data Explorer
-
-The Azure Data Explorer supports cross service queries between Azure Data Explorer, [Application Insights (AI)](../app/app-insights-overview.md), and [Log Analytics (LA)](./data-platform-logs.md). You can then query your Log Analytics/Application Insights workspace using Azure Data Explorer tools and refer to it in a cross service query. The article shows how to make a cross service query and how to add the Log Analytics/Application Insights workspace to Azure Data Explorer Web UI.
-
-The Azure Data Explorer cross service queries flow:
-
-## Add a Log Analytics/Application Insights workspace to Azure Data Explorer client tools
-
-1. Verify your Azure Data Explorer native cluster (such as *help* cluster) appears on the left menu before you connect to your Log Analytics or Application Insights cluster.
--
- In the Azure Data Explorer UI (https://dataexplorer.azure.com/clusters), select **Add Cluster**.
-
-2. In the **Add Cluster** window, add the URL of the LA or AI cluster.
-
- * For LA: `https://adx.monitor.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>`
- * For AI: `https://adx.monitor.azure.com//subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/components/<ai-app-name>`
-
- * Select **Add**.
-
-
->[!NOTE]
->* There are different endpoints for the following:
->* Azure Government- `adx.monitor.azure.us/`
->* Azure China- `adx.monitor.azure.cn/`
->* If you add a connection to more than one Log Analytics/Application insights workspace, give each a different name. Otherwise they'll all have the same name in the left pane.
-
- After the connection is established, your Log Analytics or Application Insights workspace will appear in the left pane with your native Azure Data Explorer cluster.
-
-
-> [!NOTE]
-> The number of Azure Monitor workspaces that can be mapped is limited to 100.
-
-## Create queries using Azure Monitor data
-
-You can run the queries using client tools that support Kusto queries, such as: Kusto Explorer, Azure Data Explorer Web UI, Jupyter Kqlmagic, Flow, PowerQuery, PowerShell, Lens, REST API.
-
-> [!NOTE]
-> The cross service query ability is used for data retrieval only. For more information, see [Function supportability](#function-supportability).
-
-> [!TIP]
-> * Database name should have the same name as the resource specified in the cross service query. Names are case sensitive.
-> * In cross cluster queries, make sure that the naming of Application Insights apps and Log Analytics workspaces is correct.
-> * If names contain special characters, they are replaced by URL encoding in the cross service query.
-> * If names include characters that don't meet [KQL identifier name rules](/azure/data-explorer/kusto/query/schema-entities/entity-names), they are replaced by the dash **-** character.
-
-### Direct query on your Log Analytics or Application Insights workspaces from Azure Data Explorer client tools
-
-Run queries on your Log Analytics or Application Insights workspaces. Verify that your workspace is selected in the left pane.
-
-```kusto
-Perf | take 10 // Demonstrate cross service query on the Log Analytics workspace
-```
--
-### Cross query of your Log Analytics or Application Insights and the Azure Data Explorer native cluster
-
-When you run cross cluster service queries, verify your Azure Data Explorer native cluster is selected in the left pane. The following examples demonstrate combining Azure Data Explorer cluster tables [using union](/azure/data-explorer/kusto/query/unionoperator) with Log Analytics workspace.
-
-```kusto
-union StormEvents, cluster('https://adx.monitor.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>').database('<workspace-name>').Perf
-| take 10
-```
-
-```kusto
-let CL1 = 'https://adx.monitor.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>';
-union <Azure Data Explorer table>, cluster(CL1).database(<workspace-name>).<table name>
-```
--
->[!TIP]
->* Using the [`join` operator](/azure/data-explorer/kusto/query/joinoperator), instead of union, may require a [`hint`](/azure/data-explorer/kusto/query/joinoperator#join-hints) to run it on an Azure Data Explorer native cluster.
-
-### Join data from an Azure Data Explorer cluster in one tenant with an Azure Monitor resource in another
-
-Cross-tenant queries between the services are not supported. You are signed in to a single tenant for running the query spanning both resources.
-
-If the Azure Data Explorer resource is in Tenant 'A' and Log Analytics workspace is in Tenant 'B' use one of the following two methods:
-
-1. Azure Data Explorer allows you to add roles for principals in different tenants. Add your user ID in Tenant 'B' as an authorized user on the Azure Data Explorer cluster. Validate the *['TrustedExternalTenant'](/powershell/module/az.kusto/update-azkustocluster)* property on the Azure Data Explorer cluster contains Tenant 'B'. Run the cross-query fully in Tenant 'B'.
-
-2. Use [Lighthouse](../../lighthouse/index.yml) to project the Azure Monitor resource into Tenant 'A'.
-### Connect to Azure Data Explorer clusters from different tenants
-
-Kusto Explorer automatically signs you into the tenant to which the user account originally belongs. To access resources in other tenants with the same user account, the `tenantId` has to be explicitly specified in the connection string:
-`Data Source=https://adx.monitor.azure.com/subscriptions/SubscriptionId/resourcegroups/ResourceGroupName;Initial Catalog=NetDefaultDB;AAD Federated Security=True;Authority ID=`**TenantId**
-
-## Function supportability
-
-The Azure Data Explorer cross service queries support functions for both Application Insights and Log Analytics.
-This capability enables cross-cluster queries to reference an Azure Monitor tabular function directly.
-The following commands are supported with the cross service query:
-
-* `.show functions`
-* `.show function {FunctionName}`
-* `.show database {DatabaseName} schema as json`
-
-The following image depicts an example of querying a tabular function from the Azure Data Explorer Web UI.
-To use the function, run the name in the Query window.
--
-## Additional syntax examples
-
-The following syntax options are available when calling the Log Analytics or Application Insights clusters:
-
-|Syntax Description |Application Insights |Log Analytics |
-|-|||
-| Database within a cluster that contains only the defined resource in this subscription (**recommended for cross cluster queries**) | cluster(`https://adx.monitor.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/components/<ai-app-name>').database('<ai-app-name>`) | cluster(`https://adx.monitor.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>').database('<workspace-name>`) |
-| Cluster that contains all apps/workspaces in this subscription | cluster(`https://adx.monitor.azure.com/subscriptions/<subscription-id>`) | cluster(`https://adx.monitor.azure.com/subscriptions/<subscription-id>`) |
-|Cluster that contains all apps/workspaces in the subscription and are members of this resource group | cluster(`https://adx.monitor.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>`) | cluster(`https://adx.monitor.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>`) |
-|Cluster that contains only the defined resource in this subscription | cluster(`https://adx.monitor.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/components/<ai-app-name>`) | cluster(`https://adx.monitor.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>`) |
-|For Endpoints in the UsGov | cluster(`https://adx.monitor.azure.us/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>`)|
- |For Endpoints in the China 21Vianet | cluster(`https://adx.monitor.azure.us/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>`) |
-
-## Next steps
-- Read more about the [data structure of Log Analytics workspaces and Application Insights](data-platform-logs.md).
-- Learn to [write queries in Azure Data Explorer](/azure/data-explorer/write-queries).
azure-monitor Data Platform Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-platform-logs.md
The following table describes some of the ways that you can use Azure Monitor Lo
| **Analyze** | Use [Log Analytics](./log-analytics-tutorial.md) in the Azure portal to write [log queries](./log-query-overview.md) and interactively analyze log data by using a powerful analysis engine. |
| **Alert** | Configure a [log alert rule](../alerts/alerts-log.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the results of the query match a particular result. |
| **Visualize** | Pin query results rendered as tables or charts to an [Azure dashboard](../../azure-portal/azure-portal-dashboards.md).<br>Create a [workbook](../visualize/workbooks-overview.md) to combine multiple sets of data in an interactive report.<br>Export the results of a query to [Power BI](./log-powerbi.md) to use different visualizations and share with users outside Azure.<br>Export the results of a query to [Grafana](../visualize/grafana-plugin.md) to use its dashboarding and combine with other data sources. |
-| **Get insights** | Logs support [insights](../monitor-reference.md#insights-and-curated-visualizations) that provide a customized monitoring experience for particular applications and services. |
+| **Get insights** | Logs support [insights](../insights/insights-overview.md) that provide a customized monitoring experience for particular applications and services. |
| **Retrieve** | Access log query results from a:<ul><li>Command line via the [Azure CLI](/cli/azure/monitor/log-analytics) or [Azure PowerShell cmdlets](/powershell/module/az.operationalinsights).</li><li>Custom app via the [REST API](https://dev.loganalytics.io/) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), or [Python](/python/api/overview/azure/monitor-query-readme).</li></ul> |
| **Export** | Configure [automated export of log data](./logs-data-export.md) to an Azure storage account or Azure Event Hubs.<br>Build a workflow to retrieve log data and copy it to an external location by using [Azure Logic Apps](./logicapp-flow-connector.md). |
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
Previously updated : 04/05/2022 Last updated : 09/08/2022
This article is a reference of the different applications and services that are monitored by Azure Monitor.
-## Insights and curated visualizations
+Azure Monitor data is collected and stored based on resource provider namespaces. Each resource in Azure has a unique ID. The resource provider namespace is part of all unique IDs. For example, a key vault resource ID would be similar to `/subscriptions/d03b04c7-d1d4-eeee-aaaa-87b6fcb38b38/resourceGroups/KeyVaults/providers/Microsoft.KeyVault/vaults/mysafekeys`. *Microsoft.KeyVault* is the resource provider namespace. *Microsoft.KeyVault/vaults/* is the resource provider.
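As a quick, illustrative sketch (not part of the article), the provider namespace can be read out of the example resource ID by splitting it on `/`:

```kusto
// Segment 6 of the resource ID is the resource provider namespace; segment 7 is the resource type.
print resourceId = "/subscriptions/d03b04c7-d1d4-eeee-aaaa-87b6fcb38b38/resourceGroups/KeyVaults/providers/Microsoft.KeyVault/vaults/mysafekeys"
| extend providerNamespace = tostring(split(resourceId, "/")[6])                                  // "Microsoft.KeyVault"
| extend resourceProvider  = strcat(providerNamespace, "/", tostring(split(resourceId, "/")[7]))  // "Microsoft.KeyVault/vaults"
```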
-Some services have a curated monitoring experience. That is, Microsoft provides customized functionality meant to act as a starting point for monitoring those services. These experiences are collectively known as *curated visualizations* with the larger more complex of them being called *Insights*.
+For a list of Azure resource provider namespaces, see [Resource providers for Azure services](/azure/azure-resource-manager/management/azure-services-resource-providers).
-The experiences collect and analyze a subset of logs and metrics. Depending on the service, they might also provide out-of-the-box alerting. They present this telemetry in a visual layout. The visualizations vary in size and scale.
+For a list of resource providers that support Azure Monitor, see the following:
-Some visualizations are considered part of Azure Monitor and follow the support and service level agreements for Azure. They're supported in all Azure regions where Azure Monitor is available. Other curated visualizations provide less functionality, might not scale, and might have different agreements. Some might be based solely on Azure Monitor Workbooks, while others might have an extensive custom experience.
+- **Metrics** - See [Supported metrics in Azure Monitor](essentials/metrics-supported.md).
+- **Metric alerts** - See [Supported resources for metric alerts in Azure Monitor](alerts/alerts-metric-near-real-time.md).
+- **Prometheus metrics** - See [TBD](essentials/FILL ME IN.md).
+- **Resource logs** - See [Supported categories for Azure Monitor resource logs](essentials/resource-logs-categories.md).
+- **Activity log** - All entries in the activity log are available for query, alerting, and routing to the Azure Monitor Logs store regardless of resource provider (see the sketch after this list).
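A minimal sketch of such a query, assuming the activity log is routed to a Log Analytics workspace (the `AzureActivity` table) and that the `ResourceProviderValue` column exists in your workspace schema:

```kusto
// Count recent activity log operations per resource provider namespace.
AzureActivity
| where TimeGenerated > ago(1d)
| summarize Operations = count() by ResourceProviderValue
| top 10 by Operations desc
```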
-The following table lists the available curated visualizations and information about them.
+## Services that require agents
->[!NOTE]
-> Another type of older visualization called *monitoring solutions* is no longer in active development. The replacement technology is the Azure Monitor Insights, as mentioned. We suggest you use the Insights and not deploy new instances of solutions. For more information on the solutions, see [Monitoring solutions in Azure Monitor](./insights/solutions.md).
+Azure Monitor can't see inside a service that runs its own application, operating system, or container. That type of service requires one or more agents to be installed. The agents collect metrics, logs, traces, and changes, and forward them to Azure Monitor. The following services require agents for this reason.
-|Name with docs link| State | [Azure portal link](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/more)| Description |
-|:--|:--|:--|:--|
-| [Azure Monitor Workbooks for Azure Active Directory](../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) | General availability (GA) | [Yes](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Workbooks) | Azure Active Directory provides workbooks to understand the effect of your Conditional Access policies, troubleshoot sign-in failures, and identify legacy authentications. |
-| [Azure Backup](../backup/backup-azure-monitoring-use-azuremonitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides built-in monitoring and alerting capabilities in a Recovery Services vault. |
-| [Azure Monitor for Azure Cache for Redis (preview)](../azure-cache-for-redis/redis-cache-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/redisCacheInsights) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health. |
-| [Azure Cosmos DB Insights](../cosmos-db/cosmosdb-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/cosmosDBInsights) | Provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. |
-| [Azure Container Insights](/azure/azure-monitor/insights/container-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/containerInsights) | Monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service. It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. |
-| [Azure Data Explorer Insights](/azure/data-explorer/data-explorer-insights) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/adxClusterInsights) | Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures. |
-| [Azure HDInsight (preview)](../hdinsight/log-analytics-migration.md#insights) | Preview | No | An Azure Monitor workbook that collects important performance metrics from your HDInsight cluster and provides the visualizations and dashboards for most common scenarios. Gives a complete view of a single HDInsight cluster including resource utilization and application status.|
- | [Azure IoT Edge](../iot-edge/how-to-explore-curated-visualizations.md) | GA | No | Visualize and explore metrics collected from the IoT Edge device right in the Azure portal by using Azure Monitor Workbooks-based public templates. The curated workbooks use built-in metrics from the IoT Edge runtime. These views don't need any metrics instrumentation from the workload modules. |
- | [Azure Key Vault Insights (preview)](../key-vault/key-vault-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/keyvaultsInsights) | Provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. |
- | [Azure Monitor Application Insights](./app/app-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/applicationsInsights) | Extensible application performance management service that monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It uses the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. It enables you to diagnose errors without waiting for a user to report them. Application Insights includes connection points to various development tools and integrates with Visual Studio to support your DevOps processes. |
- | [Azure Monitor Log Analytics Workspace](./logs/log-analytics-workspace-insights-overview.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/lawsInsights) | Log Analytics Workspace Insights (preview) provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article will help you understand how to onboard and use Log Analytics Workspace Insights (preview). |
- | [Azure Service Bus Insights](../service-bus-messaging/service-bus-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/serviceBusInsights) | Azure Service Bus Insights provide a view of the overall performance, failures, capacity, and operational health of all your Service Bus resources in a unified interactive experience. |
- | [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL Insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you're just setting up SQL monitoring, use SQL Insights instead of the SQL Analytics solution. |
- | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/storageInsights) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. |
- | [Azure Network Insights](../network-watcher/network-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/networkInsights) | Provides a comprehensive view of health and metrics for all your network resources. The advanced search capability helps you identify resource dependencies, enabling scenarios like identifying resources that are hosting your website, by simply searching for your website name. |
- | [Azure Monitor for Resource Groups](./insights/resource-group-insights.md) | GA | No | Triage and diagnose any problems your individual resources encounter, while offering context for the health and performance of the resource group as a whole. |
- | [Azure Monitor SAP](../virtual-machines/workloads/sap/monitor-sap-on-azure.md) | GA | No | An Azure-native monitoring product for anyone running their SAP landscapes on Azure. It works with both SAP on Azure Virtual Machines and SAP on Azure Large Instances. Collects telemetry data from Azure infrastructure and databases in one central location and visually correlates the data for faster troubleshooting. You can monitor different components of an SAP landscape, such as Azure virtual machines (VMs), high-availability clusters, SAP HANA database, and SAP NetWeaver, by adding the corresponding provider for that component. |
- | [Azure Stack HCI Insights](/azure-stack/hci/manage/azure-stack-hci-insights) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/azureStackHCIInsights) | Based on Azure Monitor Workbooks. Provides health, performance, and usage insights about registered Azure Stack HCI version 21H2 clusters that are connected to Azure and enrolled in monitoring. It stores its data in a Log Analytics workspace, which allows it to deliver powerful aggregation and filtering and analyze data trends over time. |
- | [Azure VM Insights](/azure/azure-monitor/insights/vminsights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/virtualMachines) | Monitors your Azure VMs and virtual machine scale sets at scale. It analyzes the performance and health of your Windows and Linux VMs and monitors their processes and dependencies on other resources and external processes. |
- | [Azure Virtual Desktop Insights](../virtual-desktop/azure-monitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_WVD/WvdManagerMenuBlade/insights/menuId/insights) | Azure Virtual Desktop Insights is a dashboard built on Azure Monitor Workbooks that helps IT professionals understand their Azure Virtual Desktop environments. |
+- [Azure Cloud Services](../cloud-services-extended-support/index.yml)
+- [Azure Virtual Machines](../virtual-machines/index.yml)
+- [Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml)
+- [Azure Service Fabric](../service-fabric/index.yml)
+
+In addition, applications require either the Application Insights SDK or auto-instrumentation (via an agent) to collect information and write it to the Azure Monitor data platform.
+
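One way to confirm that agent-collected data is arriving, assuming the agents report to a Log Analytics workspace, is to check the `Heartbeat` table; this sketch is illustrative rather than part of the article:

```kusto
// Latest heartbeat per machine over the past hour; stale or missing rows usually point to an agent issue.
Heartbeat
| where TimeGenerated > ago(1h)
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| order by LastHeartbeat desc
```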
+## Services with Insights
+
+Some services have curated monitoring experiences called "insights". Insights are meant to be a starting point for monitoring a service or set of services. Some insights may also automatically pull additional data that's not captured or stored in Azure Monitor. For more information on monitoring insights, see [Insights Overview](insights/insights-overview.md).
## Product integrations
-The other services and older monitoring solutions in the following table store their data in a Log Analytics workspace so that it can be analyzed with other log data collected by Azure Monitor.
+The services and [older monitoring solutions](insights/solutions.md) in the following table store their data in Azure Monitor Logs so that it can be analyzed with other log data collected by Azure Monitor.
| Product/Service | Description | |:|:|
The other services and older monitoring solutions in the following table store t
| [Microsoft Teams Rooms](/microsoftteams/room-systems/azure-monitor-deploy) | Integrated, end-to-end management of Microsoft Teams Rooms devices. |
| [Visual Studio App Center](/appcenter/) | Build, test, and distribute applications and then monitor their status and usage. See [Start analyzing your mobile app with App Center and Application Insights](app/mobile-center-quickstart.md). |
| Windows | [Windows Update Compliance](/windows/deployment/update/update-compliance-get-started) - Assess your Windows desktop upgrades.<br>[Desktop Analytics](/configmgr/desktop-analytics/overview) - Integrates with Configuration Manager to provide insight and intelligence to make more informed decisions about the update readiness of your Windows clients. |
-| **The following solutions also integrate with parts of Azure Monitor. Note that solutions are no longer under active development. Use [Insights](#insights-and-curated-visualizations) instead.** | |
+| **The following solutions also integrate with parts of Azure Monitor. Note that solutions, which are based on Azure Monitor Logs and Log Analytics, are no longer under active development. Use [Insights](insights/insights-overview.md) instead.** | |
| Network - [Network Performance Monitor solution](insights/network-performance-monitor.md) | |
| Network - [Azure Application Gateway solution](insights/azure-networking-analytics.md#azure-application-gateway-analytics) | |
| [Office 365 solution](insights/solution-office-365.md) | Monitor your Office 365 environment. Updated version with improved onboarding available through Microsoft Sentinel. |
Azure Monitor can collect data from resources outside of Azure by using the meth
| Virtual machines | Use agents to collect data from the guest operating system of virtual machines in other cloud environments or on-premises. See [Overview of Azure Monitor agents](agents/agents-overview.md). |
| REST API Client | Separate APIs are available to write data to Azure Monitor Logs and Metrics from any REST API client. See [Send log data to Azure Monitor with the HTTP Data Collector API](logs/data-collector-api.md) for Logs. See [Send custom metrics for an Azure resource to the Azure Monitor metric store by using a REST API](essentials/metrics-store-custom-rest-api.md) for Metrics. |
-## Azure supported services
-
-The following table lists Azure services and the data they collect into Azure Monitor.
-- **Metrics**: The service automatically collects metrics into Azure Monitor Metrics.
-- **Logs**: The service supports diagnostic settings that can send metrics and platform logs into Azure Monitor Logs for analysis in Log Analytics.
-- **Insight**: An insight is available that provides a customized monitoring experience for the service.
-
-| Service | Resource provider namespace | Has metrics | Has logs | Insight | Notes
-|||-|--|-|--|
- | [Azure Active Directory Domain Services](../active-directory-domain-services/index.yml) | Microsoft.AAD/DomainServices | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftaaddomainservices) | | |
- | [Azure Active Directory](../active-directory/index.yml) | No | No | [Azure Monitor Workbooks for Azure Active Directory](../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) | |
- | [Azure Analysis Services](../analysis-services/index.yml) | Microsoft.AnalysisServices/servers | [**Yes**](./essentials/metrics-supported.md#microsoftanalysisservicesservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftanalysisservicesservers) | | |
- | [Azure API Management](../api-management/index.yml) | Microsoft.ApiManagement/service | [**Yes**](./essentials/metrics-supported.md#microsoftapimanagementservice) | [**Yes**](./essentials/resource-logs-categories.md#microsoftapimanagementservice) | | |
- | [Azure App Configuration](../azure-app-configuration/index.yml) | Microsoft.AppConfiguration/configurationStores | [**Yes**](./essentials/metrics-supported.md#microsoftappconfigurationconfigurationstores) | [**Yes**](./essentials/resource-logs-categories.md#microsoftappconfigurationconfigurationstores) | | |
- | [Azure Spring Apps](../spring-apps/overview.md) | Microsoft.AppPlatform/Spring | [**Yes**](./essentials/metrics-supported.md#microsoftappplatformspring) | [**Yes**](./essentials/resource-logs-categories.md#microsoftappplatformspring) | | |
- | [Azure Attestation Service](../attestation/overview.md) | Microsoft.Attestation/attestationProviders | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftattestationattestationproviders) | | |
- | [Azure Automation](../automation/index.yml) | Microsoft.Automation/automationAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftautomationautomationaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftautomationautomationaccounts) | | |
- | [Azure VMware Solution](../azure-vmware/index.yml) | Microsoft.AVS/privateClouds | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
- | [Azure Batch](../batch/index.yml) | Microsoft.Batch/batchAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftbatchbatchaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftbatchbatchaccounts) | | |
- | [Azure Batch](../batch/index.yml) | Microsoft.BatchAI/workspaces | No | No | | |
- | [Azure Cognitive Services- Bing Search API](../cognitive-services/bing-web-search/index.yml) | Microsoft.Bing/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftmapsaccounts) | No | | |
- | [Azure Blockchain Service](../blockchain/workbench/index.yml) | Microsoft.Blockchain/blockchainMembers | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
- | [Azure Blockchain Service](../blockchain/workbench/index.yml) | Microsoft.Blockchain/cordaMembers | No | [**Yes**](./essentials/resource-logs-categories.md) | | |
- | [Azure Bot Service](/azure/bot-service/) | Microsoft.BotService/botServices | [**Yes**](./essentials/metrics-supported.md#microsoftbotservicebotservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftbotservicebotservices) | | |
- | [Azure Cache for Redis](../azure-cache-for-redis/index.yml) | Microsoft.Cache/Redis | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | [Azure Monitor for Azure Cache for Redis (preview)](../azure-cache-for-redis/redis-cache-insights-overview.md) | |
- | [Azure Cache for Redis](../azure-cache-for-redis/index.yml) | Microsoft.Cache/redisEnterprise | [**Yes**](./essentials/metrics-supported.md#microsoftcacheredisenterprise) | No | [Azure Monitor for Azure Cache for Redis (preview)](../azure-cache-for-redis/redis-cache-insights-overview.md) | |
- | [Azure Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/CdnWebApplicationFirewallPolicies | [**Yes**](./essentials/metrics-supported.md#microsoftcdncdnwebapplicationfirewallpolicies) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdncdnwebapplicationfirewallpolicies) | | |
- | [Azure Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/profiles | [**Yes**](./essentials/metrics-supported.md#microsoftcdnprofiles) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdnprofiles) | | |
- | [Azure Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/profiles/endpoints | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdnprofilesendpoints) | | |
- | [Azure Virtual Machines - Classic](../virtual-machines/index.yml) | Microsoft.ClassicCompute/domainNames/slots/roles | [**Yes**](./essentials/metrics-supported.md#microsoftclassiccomputedomainnamesslotsroles) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | |
- | [Azure Virtual Machines - Classic](../virtual-machines/index.yml) | Microsoft.ClassicCompute/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftclassiccomputevirtualmachines) | No | | |
- | [Azure Virtual Network (Classic)](../virtual-network/network-security-groups-overview.md) | Microsoft.ClassicNetwork/networkSecurityGroups | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftclassicnetworknetworksecuritygroups) | | |
- | [Azure Storage (Classic)](../storage/index.yml) | Microsoft.ClassicStorage/storageAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccounts) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Blob Storage (Classic)](../storage/blobs/index.yml) | Microsoft.ClassicStorage/storageAccounts/blobServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsblobservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Files (Classic)](../storage/files/index.yml) | Microsoft.ClassicStorage/storageAccounts/fileServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsfileservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Queue Storage (Classic)](../storage/queues/index.yml) | Microsoft.ClassicStorage/storageAccounts/queueServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsqueueservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Table Storage (Classic)](../storage/tables/index.yml) | Microsoft.ClassicStorage/storageAccounts/tableServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountstableservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | Microsoft Cloud Test Platform | Microsoft.Cloudtest/hostedpools | [**Yes**](./essentials/metrics-supported.md) | No | | |
- | Microsoft Cloud Test Platform | Microsoft.Cloudtest/pools | [**Yes**](./essentials/metrics-supported.md) | No | | |
- | [Cray ClusterStor in Azure](https://azure.microsoft.com/blog/supercomputing-in-the-cloud-announcing-three-new-cray-in-azure-offers/) | Microsoft.ClusterStor/nodes | [**Yes**](./essentials/metrics-supported.md) | No | | |
- | [Azure Cognitive Services](../cognitive-services/index.yml) | Microsoft.CognitiveServices/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftcognitiveservicesaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcognitiveservicesaccounts) | | |
- | [Azure Communication Services](../communication-services/index.yml) | Microsoft.Communication/CommunicationServices | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
- | [Azure Cloud Services](../cloud-services-extended-support/index.yml) | Microsoft.Compute/cloudServices | [**Yes**](./essentials/metrics-supported.md#microsoftcomputecloudservices) | No | | Agent required to monitor guest operating system and workflows.|
- | [Azure Cloud Services](../cloud-services-extended-support/index.yml) | Microsoft.Compute/cloudServices/roles | [**Yes**](./essentials/metrics-supported.md#microsoftcomputecloudservicesroles) | No | | Agent required to monitor guest operating system and workflows.|
- | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/disks | [**Yes**](./essentials/metrics-supported.md) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | |
- | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachines) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.|
- | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachineScaleSets | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.|
- | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachineScaleSets/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesetsvirtualmachines) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.|
- | [Microsoft Connected Vehicle Platform](https://azure.microsoft.com/blog/microsoft-connected-vehicle-platform-trends-and-investment-areas/) | Microsoft.ConnectedVehicle/platformAccounts | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
- | [Azure Container Instances](../container-instances/index.yml) | Microsoft.ContainerInstance/containerGroups | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerinstancecontainergroups) | No | [Container Insights](/azure/azure-monitor/insights/container-insights-overview) | |
- | [Azure Container Registry](../container-registry/index.yml) | Microsoft.ContainerRegistry/registries | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerregistryregistries) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcontainerregistryregistries) | | |
- | [Azure Kubernetes Service](../aks/index.yml) | Microsoft.ContainerService/managedClusters | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerservicemanagedclusters) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcontainerservicemanagedclusters) | [Container Insights](/azure/azure-monitor/insights/container-insights-overview) | |
- | [Azure Custom Providers](../azure-resource-manager/custom-providers/index.yml) | Microsoft.CustomProviders/resourceProviders | [**Yes**](./essentials/metrics-supported.md#microsoftcustomprovidersresourceproviders) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcustomprovidersresourceproviders) | | |
- | [Microsoft Dynamics 365 Customer Insights](/dynamics365/customer-insights/) | Microsoft.D365CustomerInsights/instances | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftd365customerinsightsinstances) | | |
- | [Azure Stack Edge](../databox-online/azure-stack-edge-overview.md) | Microsoft.DataBoxEdge/DataBoxEdgeDevices | [**Yes**](./essentials/metrics-supported.md#microsoftdataboxedgedataboxedgedevices) | No | | |
- | [Azure Databricks](/azure/azure-databricks/) | Microsoft.Databricks/workspaces | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftdatabricksworkspaces) | | |
- | Project CI | Microsoft.DataCollaboration/workspaces | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
- | [Azure Data Factory](../data-factory/index.yml) | Microsoft.DataFactory/dataFactories | [**Yes**](./essentials/metrics-supported.md#microsoftdatafactorydatafactories) | No | | |
- | [Azure Data Factory](../data-factory/index.yml) | Microsoft.DataFactory/factories | [**Yes**](./essentials/metrics-supported.md#microsoftdatafactoryfactories) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdatafactoryfactories) | | |
- | [Azure Data Lake Analytics](../data-lake-analytics/index.yml) | Microsoft.DataLakeAnalytics/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftdatalakeanalyticsaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdatalakeanalyticsaccounts) | | |
- | [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) | Microsoft.DataLakeStore/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftdatalakestoreaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdatalakestoreaccounts) | | |
- | [Azure Data Share](../data-share/index.yml) | Microsoft.DataShare/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftdatashareaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdatashareaccounts) | | |
- | [Azure Database for MariaDB](../mariadb/index.yml) | Microsoft.DBforMariaDB/servers | [**Yes**](./essentials/metrics-supported.md#microsoftdbformariadbservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbformariadbservers) | | |
- | [Azure Database for MySQL](../mysql/index.yml) | Microsoft.DBforMySQL/flexibleServers | [**Yes**](./essentials/metrics-supported.md#microsoftdbformysqlflexibleservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbformysqlflexibleservers) | | |
- | [Azure Database for MySQL](../mysql/index.yml) | Microsoft.DBforMySQL/servers | [**Yes**](./essentials/metrics-supported.md#microsoftdbformysqlservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbformysqlservers) | | |
- | [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/flexibleServers | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlflexibleservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlflexibleservers) | | |
- | [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/serverGroupsv2 | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlserversv2) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlserversv2) | | |
- | [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/servers | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlservers) | | |
- | [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/serversv2 | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlserversv2) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlserversv2) | | |
- | [Microsoft Azure Virtual Desktop](../virtual-desktop/index.yml) | Microsoft.DesktopVirtualization/applicationgroups | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftdesktopvirtualizationapplicationgroups) | [Azure Virtual Desktop Insights](../virtual-desktop/azure-monitor.md) | |
- | [Microsoft Azure Virtual Desktop](../virtual-desktop/index.yml) | Microsoft.DesktopVirtualization/hostpools | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftdesktopvirtualizationhostpools) | [Azure Virtual Desktop Insights](../virtual-desktop/azure-monitor.md) | |
- | [Microsoft Azure Virtual Desktop](../virtual-desktop/index.yml) | Microsoft.DesktopVirtualization/workspaces | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftdesktopvirtualizationworkspaces) | | |
- | [Azure IoT Hub](../iot-hub/index.yml) | Microsoft.Devices/ElasticPools | [**Yes**](./essentials/metrics-supported.md#microsoftdeviceselasticpools) | No | | |
- | [Azure IoT Hub](../iot-hub/index.yml) | Microsoft.Devices/ElasticPools/IotHubTenants | [**Yes**](./essentials/metrics-supported.md#microsoftdeviceselasticpoolsiothubtenants) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdeviceselasticpoolsiothubtenants) | | |
- | [Azure IoT Hub](../iot-hub/index.yml) | Microsoft.Devices/IotHubs | [**Yes**](./essentials/metrics-supported.md#microsoftdevicesiothubs) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdevicesiothubs) | | |
- | [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml) | Microsoft.Devices/ProvisioningServices | [**Yes**](./essentials/metrics-supported.md#microsoftdevicesprovisioningservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdevicesprovisioningservices) | | |
- | [Azure Digital Twins](../digital-twins/overview.md) | Microsoft.DigitalTwins/digitalTwinsInstances | [**Yes**](./essentials/metrics-supported.md#microsoftdigitaltwinsdigitaltwinsinstances) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdigitaltwinsdigitaltwinsinstances) | | |
- | [Azure Cosmos DB](../cosmos-db/index.yml) | Microsoft.DocumentDB/databaseAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftdocumentdbdatabaseaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdocumentdbdatabaseaccounts) | [Azure Cosmos DB Insights](../cosmos-db/cosmosdb-insights-overview.md) | |
- | [Azure Grid](../event-grid/index.yml) | Microsoft.EventGrid/domains | [**Yes**](./essentials/metrics-supported.md#microsofteventgriddomains) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventgriddomains) | | |
- | [Azure Grid](../event-grid/index.yml) | Microsoft.EventGrid/eventSubscriptions | [**Yes**](./essentials/metrics-supported.md#microsofteventgrideventsubscriptions) | No | | |
- | [Azure Grid](../event-grid/index.yml) | Microsoft.EventGrid/extensionTopics | [**Yes**](./essentials/metrics-supported.md#microsofteventgridextensiontopics) | No | | |
- | [Azure Grid](../event-grid/index.yml) | Microsoft.EventGrid/partnerNamespaces | [**Yes**](./essentials/metrics-supported.md#microsofteventgridpartnernamespaces) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventgridpartnernamespaces) | | |
- | [Azure Grid](../event-grid/index.yml) | Microsoft.EventGrid/partnerTopics | [**Yes**](./essentials/metrics-supported.md#microsofteventgridpartnertopics) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventgridpartnertopics) | | |
- | [Azure Grid](../event-grid/index.yml) | Microsoft.EventGrid/systemTopics | [**Yes**](./essentials/metrics-supported.md#microsofteventgridsystemtopics) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventgridsystemtopics) | | |
- | [Azure Grid](../event-grid/index.yml) | Microsoft.EventGrid/topics | [**Yes**](./essentials/metrics-supported.md#microsofteventgridtopics) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventgridtopics) | | |
- | [Azure Event Hubs](../event-hubs/index.yml) | Microsoft.EventHub/clusters | [**Yes**](./essentials/metrics-supported.md#microsofteventhubclusters) | No | 0 | |
- | [Azure Event Hubs](../event-hubs/index.yml) | Microsoft.EventHub/namespaces | [**Yes**](./essentials/metrics-supported.md#microsofteventhubnamespaces) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventhubnamespaces) | 0 | |
- | [Microsoft Experimentation Platform](https://www.microsoft.com/research/group/experimentation-platform-exp/) | microsoft.experimentation/experimentWorkspaces | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
- | [Azure HDInsight](../hdinsight/index.yml) | Microsoft.HDInsight/clusters | [**Yes**](./essentials/metrics-supported.md#microsofthdinsightclusters) | No | [Azure HDInsight (preview)](../hdinsight/log-analytics-migration.md#insights) | |
- | [Azure API for FHIR](../healthcare-apis/index.yml) | Microsoft.HealthcareApis/services | [**Yes**](./essentials/metrics-supported.md#microsofthealthcareapisservices) | [**Yes**](./essentials/resource-logs-categories.md#microsofthealthcareapisservices) | | |
- | [Azure API for FHIR](../healthcare-apis/index.yml) | Microsoft.HealthcareApis/workspaces/iotconnectors | [**Yes**](./essentials/metrics-supported.md#microsofthealthcareapisworkspacesiotconnectors) | No | | |
- | [Azure StorSimple](../storsimple/index.yml) | microsoft.hybridnetwork/networkfunctions | [**Yes**](./essentials/metrics-supported.md) | No | | |
- | [Azure StorSimple](../storsimple/index.yml) | microsoft.hybridnetwork/virtualnetworkfunctions | [**Yes**](./essentials/metrics-supported.md) | No | | |
- | [Azure Monitor](./index.yml) | microsoft.insights/autoscalesettings | [**Yes**](./essentials/metrics-supported.md#microsoftinsightsautoscalesettings) | [**Yes**](./essentials/resource-logs-categories.md#microsoftinsightsautoscalesettings) | | |
- | [Azure Monitor](./index.yml) | microsoft.insights/components | [**Yes**](./essentials/metrics-supported.md#microsoftinsightscomponents) | [**Yes**](./essentials/resource-logs-categories.md#microsoftinsightscomponents) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
- | [Azure IoT Central](../iot-central/index.yml) | Microsoft.IoTCentral/IoTApps | [**Yes**](./essentials/metrics-supported.md#microsoftiotcentraliotapps) | No | | |
- | [Azure Key Vault](../key-vault/index.yml) | Microsoft.KeyVault/managedHSMs | [**Yes**](./essentials/metrics-supported.md#microsoftkeyvaultmanagedhsms) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkeyvaultmanagedhsms) | [Azure Key Vault Insights (preview)](../key-vault/key-vault-insights-overview.md) | |
- | [Azure Key Vault](../key-vault/index.yml) | Microsoft.KeyVault/vaults | [**Yes**](./essentials/metrics-supported.md#microsoftkeyvaultvaults) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkeyvaultvaults) | [Azure Key Vault Insights (preview)](../key-vault/key-vault-insights-overview.md) | |
- | [Azure Kubernetes Service](../aks/index.yml) | Microsoft.Kubernetes/connectedClusters | [**Yes**](./essentials/metrics-supported.md) | No | | |
- | [Azure Data Explorer](/azure/data-explorer/) | Microsoft.Kusto/clusters | [**Yes**](./essentials/metrics-supported.md#microsoftkustoclusters) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkustoclusters) | | |
- | [Azure Logic Apps](../logic-apps/index.yml) | Microsoft.Logic/integrationAccounts | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftlogicintegrationaccounts) | | |
- | [Azure Logic Apps](../logic-apps/index.yml) | Microsoft.Logic/integrationServiceEnvironments | [**Yes**](./essentials/metrics-supported.md#microsoftlogicintegrationserviceenvironments) | No | | |
- | [Azure Logic Apps](../logic-apps/index.yml) | Microsoft.Logic/workflows | [**Yes**](./essentials/metrics-supported.md#microsoftlogicworkflows) | [**Yes**](./essentials/resource-logs-categories.md#microsoftlogicworkflows) | | |
- | [Azure Machine Learning](../machine-learning/index.yml) | Microsoft.MachineLearningServices/workspaces | [**Yes**](./essentials/metrics-supported.md#microsoftmachinelearningservicesworkspaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftmachinelearningservicesworkspaces) | | |
- | [Azure Maps](../azure-maps/index.yml) | Microsoft.Maps/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftmapsaccounts) | No | | |
- | [Azure Media Services](/azure/media-services/) | Microsoft.Media/mediaservices | [**Yes**](./essentials/metrics-supported.md#microsoftmediamediaservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftmediamediaservices) | | |
- | [Azure Media Services](/azure/media-services/) | Microsoft.Media/mediaservices/liveEvents | [**Yes**](./essentials/metrics-supported.md#microsoftmediamediaservicesliveevents) | No | | |
- | [Azure Media Services](/azure/media-services/) | Microsoft.Media/mediaservices/streamingEndpoints | [**Yes**](./essentials/metrics-supported.md#microsoftmediamediaservicesstreamingendpoints) | No | | |
- | [Azure Media Services](/azure/media-services/) | Microsoft.Medi) | | |
- | [Azure Spatial Anchors](../spatial-anchors/index.yml) | Microsoft.MixedReality/remoteRenderingAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftmixedrealityremoterenderingaccounts) | No | | |
- | [Azure Spatial Anchors](../spatial-anchors/index.yml) | Microsoft.MixedReality/spatialAnchorsAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftmixedrealityspatialanchorsaccounts) | No | | |
- | [Azure NetApp Files](../azure-netapp-files/index.yml) | Microsoft.NetApp/netAppAccounts/capacityPools | [**Yes**](./essentials/metrics-supported.md#microsoftnetappnetappaccountscapacitypools) | No | | |
- | [Azure NetApp Files](../azure-netapp-files/index.yml) | Microsoft.NetApp/netAppAccounts/capacityPools/volumes | [**Yes**](./essentials/metrics-supported.md#microsoftnetappnetappaccountscapacitypoolsvolumes) | No | | |
- | [Azure Application Gateway](../application-gateway/index.yml) | Microsoft.Network/applicationGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkapplicationgateways) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkapplicationgateways) | | |
- | [Azure Firewall](../firewall/index.yml) | Microsoft.Network/azureFirewalls | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkazurefirewalls) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkazurefirewalls) | | |
- | [Azure Bastion](../bastion/index.yml) | Microsoft.Network/bastionHosts | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
- | [Azure VPN Gateway](../vpn-gateway/index.yml) | Microsoft.Network/connections | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkconnections) | No | | |
- | [Azure DNS](../dns/index.yml) | Microsoft.Network/dnszones | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkdnszones) | No | | |
- | [Azure ExpressRoute](../expressroute/index.yml) | Microsoft.Network/expressRouteCircuits | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkexpressroutecircuits) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkexpressroutecircuits) | | |
- | [Azure ExpressRoute](../expressroute/index.yml) | Microsoft.Network/expressRouteGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkexpressroutegateways) | No | | |
- | [Azure ExpressRoute](../expressroute/index.yml) | Microsoft.Network/expressRoutePorts | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkexpressrouteports) | No | | |
- | [Azure Front Door](../frontdoor/index.yml) | Microsoft.Network/frontdoors | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkfrontdoors) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkfrontdoors) | | |
- | [Azure Load Balancer](../load-balancer/index.yml) | Microsoft.Network/loadBalancers | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkloadbalancers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkloadbalancers) | | |
- | [Azure Load Balancer](../load-balancer/index.yml) | Microsoft.Network/natGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworknatgateways) | No | | |
- | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/networkInterfaces | [**Yes**](./essentials/metrics-supported.md#microsoftnetworknetworkinterfaces) | No | [Azure Network Insights](../network-watcher/network-insights-overview.md) | |
- | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/networkSecurityGroups | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworknetworksecuritygroups) | [Azure Network Insights](../network-watcher/network-insights-overview.md) | |
- | [Azure Network Watcher](../network-watcher/network-watcher-monitoring-overview.md) | Microsoft.Network/networkWatchers/connectionMonitors | [**Yes**](./essentials/metrics-supported.md#microsoftnetworknetworkwatchersconnectionmonitors) | No | | |
- | [Azure Virtual WAN](../virtual-wan/virtual-wan-about.md) | Microsoft.Network/p2sVpnGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkp2svpngateways) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkp2svpngateways) | | |
- | [Azure DNS Private Zones](../dns/private-dns-privatednszone.md) | Microsoft.Network/privateDnsZones | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkprivatednszones) | No | | |
- | [Azure Private Link](../private-link/private-link-overview.md) | Microsoft.Network/privateEndpoints | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkprivateendpoints) | No | | |
- | [Azure Private Link](../private-link/private-link-overview.md) | Microsoft.Network/privateLinkServices | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkprivatelinkservices) | No | | |
- | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/publicIPAddresses | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkpublicipaddresses) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkpublicipaddresses) | [Azure Network Insights](../network-watcher/network-insights-overview.md) | |
- | [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) | Microsoft.Network/trafficmanagerprofiles | [**Yes**](./essentials/metrics-supported.md#microsoftnetworktrafficmanagerprofiles) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworktrafficmanagerprofiles) | | |
- | [Azure Virtual WAN](../virtual-wan/virtual-wan-about.md) | Microsoft.Network/virtualHubs | [**Yes**](./essentials/metrics-supported.md) | No | | |
- | [Azure VPN Gateway](../vpn-gateway/index.yml) | Microsoft.Network/virtualNetworkGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvirtualnetworkgateways) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkvirtualnetworkgateways) | | |
- | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/virtualNetworks | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvirtualnetworks) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkvirtualnetworks) | [Azure Network Insights](../network-watcher/network-insights-overview.md) | |
- | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/virtualRouters | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvirtualrouters) | No | | |
- | [Azure Virtual WAN](../virtual-wan/virtual-wan-about.md) | Microsoft.Network/vpnGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvpngateways) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkvpngateways) | | |
- | [Azure Notification Hubs](../notification-hubs/index.yml) | Microsoft.NotificationHubs/namespaces/notificationHubs | [**Yes**](./essentials/metrics-supported.md#microsoftnotificationhubsnamespacesnotificationhubs) | No | | |
- | [Azure Monitor](./index.yml) | Microsoft.OperationalInsights/workspaces | [**Yes**](./essentials/metrics-supported.md#microsoftoperationalinsightsworkspaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftoperationalinsightsworkspaces) | | |
- | [Azure Peering Service](../peering-service/index.yml) | Microsoft.Peering/peerings | [**Yes**](./essentials/metrics-supported.md#microsoftpeeringpeerings) | No | | |
- | [Azure Peering Service](../peering-service/index.yml) | Microsoft.Peering/peeringServices | [**Yes**](./essentials/metrics-supported.md#microsoftpeeringpeeringservices) | No | | |
- | [Microsoft Power BI](/power-bi/power-bi-overview) | Microsoft.PowerBI/tenants | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftpowerbitenants) | | |
- | [Microsoft Power BI](/power-bi/power-bi-overview) | Microsoft.PowerBI/tenants/workspaces | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftpowerbitenantsworkspaces) | | |
- | [Power BI Embedded](/azure/power-bi-embedded/) | Microsoft.PowerBIDedicated/capacities | [**Yes**](./essentials/metrics-supported.md#microsoftpowerbidedicatedcapacities) | [**Yes**](./essentials/resource-logs-categories.md#microsoftpowerbidedicatedcapacities) | | |
- | [Microsoft Purview](../purview/index.yml) | Microsoft.Purview/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftpurviewaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftpurviewaccounts) | | |
- | [Azure Site Recovery](../site-recovery/index.yml) | Microsoft.RecoveryServices/vaults | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
- | [Azure Relay](../azure-relay/relay-what-is-it.md) | Microsoft.Relay/namespaces | [**Yes**](./essentials/metrics-supported.md#microsoftrelaynamespaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftrelaynamespaces) | | |
- | [Azure Resource Manager](../azure-resource-manager/index.yml) | Microsoft.Resources/subscriptions | [**Yes**](./essentials/metrics-supported.md) | No | | |
- | [Azure Cognitive Search](../search/index.yml) | Microsoft.Search/searchServices | [**Yes**](./essentials/metrics-supported.md#microsoftsearchsearchservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsearchsearchservices) | | |
- | [Azure Service Bus](/azure/service-bus/) | Microsoft.ServiceBus/namespaces | [**Yes**](./essentials/metrics-supported.md#microsoftservicebusnamespaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftservicebusnamespaces) | [Azure Service Bus](/azure/service-bus/) | |
- | [Azure Service Fabric](../service-fabric/index.yml) | Microsoft.ServiceFabric | No | No | [Service Fabric](../service-fabric/index.yml) | Agent required to monitor guest operating system and workflows.|
- | [Azure SignalR Service](../azure-signalr/index.yml) | Microsoft.SignalRService/SignalR | [**Yes**](./essentials/metrics-supported.md#microsoftsignalrservicesignalr) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsignalrservicesignalr) | | |
- | [Azure SignalR Service](../azure-signalr/index.yml) | Microsoft.SignalRService/WebPubSub | [**Yes**](./essentials/metrics-supported.md#microsoftsignalrservicewebpubsub) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsignalrservicewebpubsub) | | |
- | [Azure SQL Managed Instance](/azure/azure-sql/database/monitoring-tuning-index) | Microsoft.Sql/managedInstances | [**Yes**](./essentials/metrics-supported.md#microsoftsqlmanagedinstances) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsqlmanagedinstances) | [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | |
- | [Azure SQL Database](/azure/azure-sql/database/index) | Microsoft.Sql/servers/databases | [**Yes**](./essentials/metrics-supported.md#microsoftsqlserversdatabases) | No | [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | |
- | [Azure SQL Database](/azure/azure-sql/database/index) | Microsoft.Sql/servers/elasticpools | [**Yes**](./essentials/metrics-supported.md#microsoftsqlserverselasticpools) | No | [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | |
- | [Azure Storage](../storage/index.yml) | Microsoft.Storage/storageAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccounts) | No | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Blob Storage](../storage/blobs/index.yml) | Microsoft.Storage/storageAccounts/blobServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsblobservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountsblobservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Files](../storage/files/index.yml) | Microsoft.Storage/storageAccounts/fileServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsfileservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountsfileservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Queue Storage](../storage/queues/index.yml) | Microsoft.Storage/storageAccounts/queueServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsqueueservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountsqueueservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Table Storage](../storage/tables/index.yml) | Microsoft.Storage/storageAccounts/tableServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountstableservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountstableservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure HPC Cache](../hpc-cache/index.yml) | Microsoft.StorageCache/caches | [**Yes**](./essentials/metrics-supported.md#microsoftstoragecachecaches) | No | | |
- | [Azure Storage](../storage/index.yml) | Microsoft.StorageSync/storageSyncServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragesyncstoragesyncservices) | No | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Stream Analytics](../stream-analytics/index.yml) | Microsoft.StreamAnalytics/streamingjobs | [**Yes**](./essentials/metrics-supported.md#microsoftstreamanalyticsstreamingjobs) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstreamanalyticsstreamingjobs) | | |
- | [Azure Synapse Analytics](/azure/sql-data-warehouse/) | Microsoft.Synapse/workspaces | [**Yes**](./essentials/metrics-supported.md#microsoftsynapseworkspaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsynapseworkspaces) | | |
- | [Azure Synapse Analytics](/azure/sql-data-warehouse/) | Microsoft.Synapse/workspaces/bigDataPools | [**Yes**](./essentials/metrics-supported.md#microsoftsynapseworkspacesbigdatapools) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsynapseworkspacesbigdatapools) | | |
- | [Azure Synapse Analytics](/azure/sql-data-warehouse/) | Microsoft.Synapse/workspaces/sqlPools | [**Yes**](./essentials/metrics-supported.md#microsoftsynapseworkspacessqlpools) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsynapseworkspacessqlpools) | | |
- | [Azure Time Series Insights](../time-series-insights/index.yml) | Microsoft.TimeSeriesInsights/environments | [**Yes**](./essentials/metrics-supported.md#microsofttimeseriesinsightsenvironments) | [**Yes**](./essentials/resource-logs-categories.md#microsofttimeseriesinsightsenvironments) | | |
- | [Azure Time Series Insights](../time-series-insights/index.yml) | Microsoft.TimeSeriesInsights/environments/eventsources | [**Yes**](./essentials/metrics-supported.md#microsofttimeseriesinsightsenvironmentseventsources) | [**Yes**](./essentials/resource-logs-categories.md#microsofttimeseriesinsightsenvironmentseventsources) | | |
- | [Azure VMware Solution](../azure-vmware/index.yml) | Microsoft.VMwareCloudSimple/virtualMachines | [**Yes**](./essentials/metrics-supported.md) | No | | |
- | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/connections | [**Yes**](./essentials/metrics-supported.md#microsoftwebconnections) | No | | |
- | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/hostingEnvironments | [**Yes**](./essentials/metrics-supported.md#microsoftwebhostingenvironments) | [**Yes**](./essentials/resource-logs-categories.md#microsoftwebhostingenvironments) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
- | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/hostingEnvironments/multiRolePools | [**Yes**](./essentials/metrics-supported.md#microsoftwebhostingenvironmentsmultirolepools) | No | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
- | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/hostingEnvironments/workerPools | [**Yes**](./essentials/metrics-supported.md#microsoftwebhostingenvironmentsworkerpools) | No | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
- | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/serverFarms | [**Yes**](./essentials/metrics-supported.md#microsoftwebserverfarms) | No | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
- | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/sites | [**Yes**](./essentials/metrics-supported.md#microsoftwebsites) | [**Yes**](./essentials/resource-logs-categories.md#microsoftwebsites) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
- | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/sites/slots | [**Yes**](./essentials/metrics-supported.md#microsoftwebsitesslots) | [**Yes**](./essentials/resource-logs-categories.md#microsoftwebsitesslots) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
- | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/staticSites | [**Yes**](./essentials/metrics-supported.md#microsoftwebstaticsites) | No | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
- ## Next steps - Read more about the [Azure Monitor data platform that stores the logs and metrics collected by insights and solutions](data-platform.md).
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
Last updated 09/01/2022 - # Azure Monitor overview+ Azure Monitor helps you maximize the availability and performance of your applications and services. It delivers a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. This information helps you understand how your applications are performing and proactively identify issues that affect them and the resources they depend on.
A few examples of what you can do with Azure Monitor include:
## Overview The following diagram gives a high-level view of Azure Monitor. -- At the center of the diagram are the data stores for metrics and logs and changes, which are the fundamental types of data used by Azure Monitor. -- On the left are the [sources of monitoring data](data-sources.md) that populate these [data stores](data-platform.md). -- On the right are the different functions that Azure Monitor performs with this collected data. This includes such actions as analysis, alerting, and integration such as streaming to external systems.
+- The stores for the **[data platform](data-platform.md)** are at the center of the diagram. Azure Monitor stores these fundamental types of data: metrics, logs, traces, and changes.
+- The **[sources of monitoring data](data-sources.md)** that populate these data stores are on the left.
+- The different functions that Azure Monitor performs with this collected data are on the right. These include such actions as analysis, alerting, and streaming to external systems.
+- At the bottom is a layer of integration pieces. These integrations actually occur throughout the other parts of the diagram, but they're too complex to show visually.
:::image type="content" source="media/overview/azure-monitor-overview-2022_10_15-add-prometheus-opt.svg" alt-text="Diagram that shows an overview of Azure Monitor." border="false" lightbox="media/overview/azure-monitor-overview-2022_10_15-add-prometheus-opt.svg":::
-The following video uses an earlier version of the preceding diagram, but its explanations are still relevant.
-
-> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4qXeL]
-- ## Observability and the Azure Monitor data platform Metrics, logs, and distributed traces are commonly referred to as the three pillars of observability. Observability can be achieved by aggregating and correlating these different types of data across the entire system being monitored.
-Natively, Azure Monitor stores data as metrics, logs, or changes. Traces are stored in the Logs store. Each storage platform is optimized for particular monitoring scenarios, and each supports different features in Azure Monitor. Features such as data analysis, visualizations, or alerting require you to understand the differences so that you can implement your required scenario in the most efficient and cost effective manner.
+Natively, Azure Monitor stores data as metrics, logs, or changes. Traces are stored in the Logs store. Each storage platform is optimized for particular monitoring scenarios, and each supports different features in Azure Monitor. It's important for you to understand the differences between features such as data analysis, visualizations, or alerting, so that you can implement your required scenario in the most efficient and cost effective manner.
| Pillar | Description | |:|:|
-| Metrics | Metrics are numerical values that describe some aspect of a system at a particular point in time. They are collected at regular intervals and are identified with a timestamp, a name, a value, and one or more defining labels. Metrics can be aggregated using various algorithms, compared to other metrics, and analyzed for trends over time.<br><br>Metrics in Azure Monitor are stored in a time-series database, which is optimized for analyzing time-stamped data. For more information, see [Azure Monitor Metrics](essentials/data-platform-metrics.md). |
-| Logs | [Logs](logs/data-platform-logs.md) are events that occurred within the system. They can contain different kinds of data and may be structured or free form text with a timestamp. They may be created sporadically as events in the environment generate log entries, and a system under heavy load will typically generate more log volume.<br><br>Azure Monitor stores logs the Azure Monitor Logs store. The store allows you to segregate logs into separate "Log Analytics workspaces". There you can analyze them using the Log Analytics tool. Log Analytics workspaces are based on [Azure Data Explorer](/azure/data-explorer/), which provides a powerful analysis engine and the [Kusto rich query language](/azure/kusto/query/). For more information, see [Azure Monitor Logs](logs/data-platform-logs.md). |
+| Metrics | Metrics are numerical values that describe some aspect of a system at a particular point in time. Metrics are collected at regular intervals and are identified with a timestamp, a name, a value, and one or more defining labels. Metrics can be aggregated using various algorithms, compared to other metrics, and analyzed for trends over time.<br><br>Metrics in Azure Monitor are stored in a time-series database, which is optimized for analyzing time-stamped data. For more information, see [Azure Monitor Metrics](essentials/data-platform-metrics.md). |
+| Logs | [Logs](logs/data-platform-logs.md) are events that occurred within the system. They can contain different kinds of data and may be structured or free-form text with a timestamp. They may be created sporadically as events in the environment generate log entries, and a system under heavy load will typically generate more log volume.<br><br>Azure Monitor stores logs in the Azure Monitor Logs store. The store allows you to segregate logs into separate "Log Analytics workspaces". There you can analyze them using the Log Analytics tool. Log Analytics workspaces are based on [Azure Data Explorer](/azure/data-explorer/), which provides a powerful analysis engine and the [Kusto rich query language](/azure/kusto/query/). For more information, see [Azure Monitor Logs](logs/data-platform-logs.md). |
| Distributed traces | Traces are series of related events that follow a user request through a distributed system. They can be used to determine behavior of application code and the performance of different transactions. While logs will often be created by individual components of a distributed system, a trace measures the operation and performance of your application across the entire set of components.<br><br>Distributed tracing in Azure Monitor is enabled with the [Application Insights SDK](app/distributed-tracing.md). Trace data is stored with other application log data collected by Application Insights and stored in Azure Monitor Logs. For more information, see [What is Distributed Tracing?](app/distributed-tracing.md). | | Changes | Changes are tracked using [Change Analysis](change/change-analysis.md). Changes are a series of events that occur in your Azure application and resources. Change Analysis is a subscription-level observability tool that's built on the power of Azure Resource Graph. <br><br> Once Change Analysis is enabled, the `Microsoft.ChangeAnalysis` resource provider is registered with an Azure Resource Manager subscription. Change Analysis' integrations with Monitoring and Diagnostics tools provide data to help users understand what changes might have caused the issues. Read more about Change Analysis in [Use Change Analysis in Azure Monitor](./change/change-analysis.md). | Azure Monitor aggregates and correlates data across multiple Azure subscriptions and tenants, in addition to hosting data for other services. Because this data is stored together, it can be correlated and analyzed using a common set of tools. - > [!NOTE] > It's important to distinguish between Azure Monitor Logs and sources of log data in Azure. For example, subscription level events in Azure are written to an [activity log](essentials/platform-logs-overview.md) that you can view from the Azure Monitor menu. Most resources will write operational information to a [resource log](essentials/platform-logs-overview.md) that you can forward to different locations. Azure Monitor Logs is a log data platform that collects activity logs and resource logs along with other monitoring data to provide deep analysis across your entire set of resources.
Change Analysis alerts you to live site issues, outages, component failures, or
Change Analysis builds on [Azure Resource Graph](../governance/resource-graph/overview.md) to provide a historical record of how your Azure resources have changed over time. It detects managed identities, platform operating system upgrades, and hostname changes. Change Analysis securely queries IP configuration rules, TLS settings, and extension versions to provide more detailed change data.
-## What data does Azure Monitor collect?
+## What data can Azure Monitor collect?
Azure Monitor can collect data from [sources](monitor-reference.md) that range from your application to any operating system and services it relies on, down to the platform itself. Azure Monitor collects data from each of the following tiers: - **Application** - Data about the performance and functionality of the code you've written, regardless of its platform.
+- **Container** - Data about containers and applications running inside containers, such as Azure Kubernetes Service.
- **Guest operating system** - Data about the operating system on which your application is running. The system could be running in Azure, another cloud, or on-premises.-- **Azure resource** - Data about the operation of an Azure resource. For a complete list of the resources that have metrics or logs, see [What can you monitor with Azure Monitor?](monitor-reference.md#azure-supported-services).
+- **Azure resource** - Data about the operation of an Azure resource. For a list of the resources that have metrics and/or logs, see [What can you monitor with Azure Monitor?](monitor-reference.md).
- **Azure subscription** - Data about the operation and management of an Azure subscription, and data about the health and operation of Azure itself. - **Azure tenant** - Data about the operation of tenant-level Azure services, such as Azure Active Directory. - **Azure resource changes** - Data about changes within your Azure resources and how to address and triage incidents and issues.
Azure Monitor can collect log data from any REST client by using the [Data Colle
Monitoring data is only useful if it can increase your visibility into the operation of your computing environment. Some Azure resource providers have a "curated visualization," which gives you a customized monitoring experience for that particular service or set of services. They generally require minimal configuration. Larger, scalable, curated visualizations are known as "insights" and marked with that name in the documentation and the Azure portal.
-For more information, see [List of insights and curated visualizations using Azure Monitor](monitor-reference.md#insights-and-curated-visualizations). Some of the larger insights are described here.
+For more information, see [List of insights and curated visualizations using Azure Monitor](insights/insights-overview.md). Some of the larger insights are described here.
### Application Insights
You'll often have the requirement to integrate Azure Monitor with other systems
### API
-Multiple APIs are available to read and write metrics and logs to and from Azure Monitor in addition to accessing generated alerts. You can also configure and retrieve alerts. With APIs, you have essentially unlimited possibilities to build custom solutions that integrate with Azure Monitor.
---
+Multiple APIs are available to read and write metrics and logs to and from Azure Monitor in addition to accessing generated alerts. You can also configure and retrieve alerts. With APIs, you have unlimited possibilities to build custom solutions that integrate with Azure Monitor.
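For example, platform metrics for a single resource can be read with one REST call. The following is only a sketch: the resource ID, metric name, time span, and bearer token are placeholders, and the api-version shown is simply one widely used version of the Azure Monitor metrics API.

```http
GET https://management.azure.com/{resourceId}/providers/Microsoft.Insights/metrics?metricnames=Percentage%20CPU&timespan=2022-11-01T00:00:00Z/2022-11-01T06:00:00Z&interval=PT1M&aggregation=Average&api-version=2018-01-01
Authorization: Bearer {token}
```

The response returns the requested aggregation as a time series that you can feed into dashboards or custom tooling.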
## Next steps Learn more about:- * [Metrics and logs](./data-platform.md#metrics) for the data collected by Azure Monitor. * [Data sources](data-sources.md) for how the different components of your application send telemetry. * [Log queries](logs/log-query-overview.md) for analyzing collected data.
-* [Best practices](/azure/architecture/best-practices/monitoring) for monitoring cloud applications and services.
+* [Best practices](/azure/architecture/best-practices/monitoring) for monitoring cloud applications and services.
azure-monitor Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/terminology.md
Operations Management Suite (OMS) was a bundling of the following Azure manageme
- Log Analytics - Site Recovery
-[New pricing has been introduced for these services](https://azure.microsoft.com/blog/introducing-a-new-way-to-purchase-azure-monitoring-services/), and the OMS bundling is no longer available for new customers. None of the services that were part of OMS have changed, except for the consolidation into Azure Monitor described above.
---
+[New pricing has been introduced for these services](https://azure.microsoft.com/blog/introducing-a-new-way-to-purchase-azure-monitoring-services/), and the OMS bundling is no longer available for new customers. None of the services that were part of OMS have changed, except for the consolidation into Azure Monitor described above. The OMS portal was retired and is no longer available.
## Next steps - Read an [overview of Azure Monitor](overview.md) that describes its different components and features.-- Learn about the [transition of the OMS portal](./logs/oms-portal-transition.md).
azure-monitor Tutorial Monitor Vm Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-alert.md
Title: Tutorial - Alert when Azure virtual is down
+ Title: Alert when Azure virtual machine is down
description: Create an alert rule in Azure Monitor to proactively notify you if a virtual machine is unavailable.
Last updated 11/04/2021
-# Tutorial: Create alert when Azure virtual machine is unavailable
+# Create alert when Azure virtual machine is unavailable
One of the most common alerting conditions for a virtual machine is whether the virtual machine is running. Once you enable monitoring with VM insights in Azure Monitor for the virtual machine, a heartbeat is sent to Azure Monitor every minute. You can create a log query alert rule that sends an alert if a heartbeat isn't detected. This method not only alerts if the virtual machine isn't running, but also if it's not responsive.
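To illustrate the idea behind such a rule, the sketch below runs a heartbeat check through the Log Analytics query REST API. The workspace ID, token, and the five-minute threshold are placeholders rather than values from this article; the alert rule itself is created in the Azure portal as described in the following steps.

```http
POST https://api.loganalytics.io/v1/workspaces/{workspaceId}/query
Content-Type: application/json
Authorization: Bearer {token}

{
  "query": "Heartbeat | summarize LastHeartbeat = max(TimeGenerated) by Computer | where LastHeartbeat < ago(5m)"
}
```

Any computer returned by this query hasn't reported a heartbeat in the last five minutes, which is the kind of condition the alert rule fires on.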
-In this tutorial, you learn how to:
+In this article, you learn how to:
> [!div class="checklist"] > * View log data collected by VM insights in Azure Monitor for a virtual machine. > * Create an alert rule from log data that will proactively notify you if the virtual machine is unavailable. ## Prerequisites
-To complete this tutorial you need the following:
+To complete the steps in this article, you need the following:
- An Azure virtual machine to monitor. - Monitoring with VM insights enabled for the virtual machine. See [Tutorial: Enable monitoring for Azure virtual machine](tutorial-monitor-vm-enable.md).
azure-monitor Tutorial Monitor Vm Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-enable.md
Title: Tutorial - Enable monitoring for Azure virtual machine
+ Title: Enable monitoring for Azure virtual machine
description: Enable monitoring with VM insights in Azure Monitor to monitor an Azure virtual machine.
Last updated 11/04/2021
-# Tutorial: Enable monitoring for Azure virtual machine
-To monitor the health and performance of an Azure virtual machine, you need to install an agent to collect data from its guest operating system. VM insights is a feature of Azure Monitor for monitoring the guest operating system and workloads running on Azure virtual machines. When you enable monitoring for an Azure virtual machine, it installs the necessary agents and starts collecting performance, process, and dependency information from the guest operating system.
+# Enable monitoring for Azure virtual machine
+To monitor the health and performance of an Azure virtual machine, you need to install an agent to collect data from its guest operating system. VM insights monitors the guest operating system and workloads running on Azure virtual machines. When you enable monitoring for an Azure virtual machine, it installs the necessary agents and starts collecting performance, process, and dependency information from the guest operating system.
> [!NOTE] > If you're completely new to Azure Monitor, you should start with [Tutorial: Monitor Azure resources with Azure Monitor](../essentials/monitor-azure-resource.md). Azure virtual machines generate similar monitoring data as other Azure resources such as platform metrics and Activity log. This tutorial describes how to enable additional monitoring unique to virtual machines.
In this tutorial, you learn how to:
> [!div class="checklist"] > * Create a Log Analytics workspace to collect performance and log data from the virtual machine. > * Enable VM insights for the virtual machine which installs the required agents and begins data collection.
-> * Inspect graphs analyzing performance data collected form the virtual machine.
+> * Inspect graphs analyzing performance data collected from the virtual machine.
> * Inspect map showing processes running on the virtual machine and dependencies with other systems.
In this tutorial, you learn how to:
> VM insights installs the Log Analytics agent which collects performance data from the guest operating system of virtual machines. It doesn't collect logs from the guest operating system and doesn't send performance data to Azure Monitor Metrics. For this functionality, see [Tutorial: Collect guest logs and metrics from Azure virtual machine](tutorial-monitor-vm-guest.md). ## Prerequisites
-To complete this tutorial you need the following:
+To complete this tutorial, you need the following:
- An Azure virtual machine to monitor.
To complete this tutorial you need the following:
## Enable monitoring
-Select **Insights** from your virtual machine's menu in the Azure portal. If VM insights hasn't yet been enabled for it, you should see a screen similar to the following allowing you to enable monitoring. Click **Enable**.
+Select **Insights** from your virtual machine's menu in the Azure portal. If VM insights hasn't been enabled, you should see a screen similar to the following allowing you to enable monitoring. Click **Enable**.
> [!NOTE] > If you selected the option to **Enable detailed monitoring** when you created your virtual machine, VM insights may already be enabled. Select your workspace and click **Enable** again. This is the workspace where data collected by VM insights will be sent.
azure-monitor Tutorial Monitor Vm Guest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-guest.md
Title: Tutorial - Collect guest logs and metrics from Azure virtual machine
+ Title: Collect guest logs and metrics from Azure virtual machine
description: Create data collection rule to collect guest logs and metrics from Azure virtual machine.
Last updated 11/08/2021
-# Tutorial: Collect guest logs and metrics from Azure virtual machine
+# Collect guest logs and metrics from Azure virtual machine
When you [enable monitoring with VM insights](tutorial-monitor-vm-enable.md), it collects performance data using the Log Analytics agent. To collect logs from the guest operating system and to send performance data to Azure Monitor Metrics, install the [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) and create a [data collection rule](../essentials/data-collection-rule-overview.md) (DCR) that defines the data to collect and where to send it. > [!NOTE]
azure-netapp-files Azacsnap Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-introduction.md
This is a list of technical articles where AzAcSnap has been used as part of a d
* [Manual Recovery Guide for SAP Oracle 19c on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-oracle-19c-on-azure-vms-from-azure/ba-p/3242408) * [Manual Recovery Guide for SAP HANA on Azure Large Instance from storage snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-hana-on-azure-large-instance-from/ba-p/3242347) * [Automating SAP system copy operations with Libelle SystemCopy](https://docs.netapp.com/us-en/netapp-solutions-sap/lifecycle/libelle-sc-overview.html)
+* [Protecting HANA databases configured with HSR on Azure NetApp Files with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/protecting-hana-databases-configured-with-hsr-on-azure-netapp/ba-p/3654620)
## Command synopsis
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
na Previously updated : 10/31/2022 Last updated : 11/01/2022 # Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files
Azure NetApp Files supports the use of [Active Directory integrated DNS](/window
Ensure that you meet the following requirements about the DNS configurations: * If you're using standalone DNS servers:
-* Ensure that DNS servers have network connectivity to the Azure NetApp Files delegated subnet hosting the Azure NetApp Files volumes.
+ * Ensure that DNS servers have network connectivity to the Azure NetApp Files delegated subnet hosting the Azure NetApp Files volumes.
* Ensure that network ports UDP 53 and TCP 53 are not blocked by firewalls or NSGs. * Ensure that [the SRV records registered by the AD DS Net Logon service](https://social.technet.microsoft.com/wiki/contents/articles/7608.srv-records-registered-by-net-logon.aspx) have been created on the DNS servers. * Ensure that the PTR records for the SRV records registered by the AD DS Net Logon service have been created on the DNS servers.
azure-resource-manager Networking Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/networking-move-limitations.md
Title: Move Azure Networking resources to new subscription or resource group description: Use Azure Resource Manager to move virtual networks and other networking resources to a new resource group or subscription. Previously updated : 08/16/2022 Last updated : 10/28/2022 # Move networking resources to new resource group or subscription
If you want to move networking resources to a new region, see [Tutorial: Move Az
## Dependent resources > [!NOTE]
-> Any resource, including a VPN Gateway, that is associated with a public IP Standard SKU address must be disassociated from the public IP address before moving across subscriptions.
+> Any resource, including a VPN Gateway, that is associated with a public IP Standard SKU address can't be moved across subscriptions. For virtual machines, you can [disassociate the public IP address](../../../virtual-network/ip-services/remove-public-ip-address-vm.md) before moving across subscriptions.
When moving a resource, you must also move its dependent resources (for example - public IP addresses, virtual network gateways, all associated connection resources). Local network gateways can be in a different resource group.
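Before starting a move, you can check whether a set of resources and their dependencies can be moved together by calling the validate move operation. The following is a sketch only; the subscription IDs, resource group names, resource ID, and api-version are placeholder assumptions.

```http
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{sourceResourceGroup}/validateMoveResources?api-version=2021-04-01
Content-Type: application/json
Authorization: Bearer {token}

{
  "resources": [
    "/subscriptions/{subscriptionId}/resourceGroups/{sourceResourceGroup}/providers/Microsoft.Network/publicIPAddresses/{publicIpName}"
  ],
  "targetResourceGroup": "/subscriptions/{targetSubscriptionId}/resourceGroups/{targetResourceGroup}"
}
```

The call returns 202 Accepted, and the validation result is retrieved by polling the URL in the `Location` response header.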
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Title: Move operation support by resource type description: Lists the Azure resource types that can be moved to a new resource group, subscription, or region. Previously updated : 08/29/2022 Last updated : 10/28/2022 # Move operation support for resources
Jump to a resource provider namespace:
> | privateendpointredirectmaps | No | No | No | > | privateendpoints | Yes - for [supported private-link resources](./move-limitations/networking-move-limitations.md#private-endpoints)<br>No - for all other private-link resources | Yes - for [supported private-link resources](./move-limitations/networking-move-limitations.md#private-endpoints)<br>No - for all other private-link resources | No | > | privatelinkservices | No | No | No |
-> | publicipaddresses | Yes | Yes | Yes<br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move public IP address configurations (IP addresses are not retained). |
+> | publicipaddresses | Yes | Yes - see [Networking move guidance](./move-limitations/networking-move-limitations.md) | Yes<br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move public IP address configurations (IP addresses are not retained). |
> | publicipprefixes | Yes | Yes | No | > | routefilters | No | No | No | > | routetables | Yes | Yes | No |
Jump to a resource provider namespace:
> | trafficmanagerprofiles / heatmaps | No | No | No | > | trafficmanagerusermetricskeys | No | No | No | > | virtualhubs | No | No | No |
-> | virtualnetworkgateways | Yes | Yes | No |
+> | virtualnetworkgateways | Yes | Yes - see [Networking move guidance](./move-limitations/networking-move-limitations.md) | No |
> | virtualnetworks | Yes | Yes | No | > | virtualnetworktaps | No | No | No | > | virtualrouters | Yes | Yes | No |
azure-video-indexer Indexing Configuration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/indexing-configuration-guide.md
+
+ Title: Indexing configuration guide
+description: This article explains the configuration options of indexing process with Azure Video Indexer.
+ Last updated : 11/01/2022++++
+# The indexing configuration guide
+
+It's important to understand the configuration options so you can index efficiently while meeting your indexing objectives. When indexing videos, you can use the default settings or adjust many of the settings. Azure Video Indexer lets you choose from a range of language, indexing, custom model, and streaming settings that affect the insights generated, cost, and performance.
+
+This article explains each of the options and the impact of each option to enable informed decisions when indexing. The article discusses the [Azure Video Indexer website](https://www.videoindexer.ai/) experience but the same options apply when submitting jobs through the API (see the [API guide](video-indexer-use-apis.md)). When indexing large volumes, follow the [at-scale guide](considerations-when-use-at-scale.md).
+
+The initial upload screen presents options to define the video name, source language, and privacy settings.
++
+All the other setting options appear if you select Advanced options.
++
+## Default settings
+
+By default, Azure Video Indexer is configured to use a **Video source language** of English, a **Privacy** setting of private, the **Standard** audio and video setting, and a **Streaming quality** of single bitrate.
+
+> [!TIP]
+> This topic describes each indexing option in detail.
+
+Below are a few examples of when using the default setting might not be a good fit:
+
+- If you need insights such as observed people or matched person, which are only available through Advanced Video.
+- If you're only using Azure Video Indexer for transcription and translation, indexing of both audio and video isn't required; **Basic** for audio should suffice.
+- If you're consuming Azure Video Indexer insights but have no need to generate a new media file, streaming isn't necessary and **No streaming** should be selected to avoid the encoding job and its associated cost.
+- If a video is primarily in a language that isn't English.
+
+### Video source language
+
+If you're aware of the language spoken in the video, select the language from the video source language list. If you're unsure of the language of the video, choose **Auto-detect single language**. When uploading and indexing your video, Azure Video Indexer will use language identification (LID) to detect the video's language and generate transcription and insights with the detected language.
+
+If the video may contain multiple languages and you aren't sure which ones, select **Auto-detect multi-language**. In this case, multi-language (MLID) detection will be applied when uploading and indexing your video.
+
+While auto-detect is a great option when the language in your videos varies, there are two points to consider when using LID or MLID:
+
+- LID/MLID don't support all the languages supported by Azure Video Indexer.
+- The transcription is of a higher quality when you pre-select the video's appropriate language.
+
+Learn more about [language support and supported languages](language-support.md).
+
+### Privacy
+
+This option allows you to determine if the insights should only be accessible to users in your Azure Video Indexer account or to anyone with a link.
+
+### Indexing options
+
+When indexing a video with the default settings, be aware that each of the audio and video indexing options may be priced differently. See [Azure Video Indexer pricing](https://azure.microsoft.com/pricing/details/video-indexer/) for details.
+
+Below are the indexing type options with details of their insights provided. To modify the indexing type, select **Advanced settings**.
+
+|Audio only|Video only |Audio & Video |
+|--|--|--|
+|Basic |||
+|Standard| Standard |Standard |
+|Advanced |Advanced|Advanced |
+
+## Advanced settings
+
+### Audio only
+
+- **Basic**: Indexes and extracts insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles, named entities (brands, locations, people), and topics.
+- **Standard**: Indexes and extracts insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles, emotions, keywords, named entities (brands, locations, people), sentiments, speakers, and topics.
+- **Advanced**: Indexes and extracts insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles, audio effects (preview), emotions, keywords, named entities (brands, locations, people), sentiments, speakers, and articles.
+
+### Video only
+
+- **Standard**: Indexes and extracts insights by using video only (ignoring audio) and provides the following insights: labels (OCR), named entities (OCR - brands, locations, people), OCR, people, scenes (keyframes and shots), and topics (OCR).
+- **Advanced**: Indexes and extracts insights by using video only (ignoring audio) and provides the following insights: labels (OCR), matched person (preview), named entities (OCR - brands, locations, people), OCR, observed people (preview), people, scenes (keyframes and shots), and topics (OCR).
+
+### Audio and Video
+
+- **Standard**: Indexes and extracts insights by using audio and video and provides the following insights: transcription, translation, formatting of output captions and subtitles, audio effects (preview), emotions, keywords, named entities (brands, locations, people), OCR, people, sentiments, speakers, and topics.
+- **Advanced**: Indexes and extracts insights by using audio and video and provides the following insights: transcription, translation, formatting of output captions and subtitles, audio effects (preview), emotions, keywords, matched person (preview), named entities (brands, locations, people), OCR, observed people (preview), people, sentiments, speakers, and topics.
+
+### Streaming quality options
+
+When indexing a video, you can decide whether to encode the file, which enables streaming. The sequence is as follows:
+
+Upload > Encode (optional) > Index & Analysis > Publish for streaming (optional)
+
+Encoding and streaming operations are performed and billed by Azure Media Services. There are two additional operations associated with the creation of a streaming video:
+
+- The creation of a Streaming Endpoint.
+- Egress traffic: the volume depends on the number of video playbacks, video playback length, and the video quality (bitrate).
+
+There are several aspects that influence the total costs of the encoding job. The first is whether the encoding uses single bitrate or adaptive bitrate streaming, which creates either a single output or multiple encoding quality outputs. Each output is billed separately and depends on the source quality of the video you uploaded to Azure Video Indexer.
+
+For Media Services encoding pricing details, see [pricing](https://azure.microsoft.com/pricing/details/media-services/#pricing).
+
+When indexing a video, default streaming settings are applied. Below are the streaming type options that can be modified if you select **Advanced** settings and go to **Streaming quality**.
+
+|Single bitrate|Adaptive bitrate| No streaming |
+|--|--|--|
+
+- **Single bitrate**: With Single Bitrate, the standard Media Services encoder cost will apply for the output. If the video height is greater than or equal to 720p HD, Azure Video Indexer encodes it with a resolution of 1280 x 720. Otherwise, it's encoded as 640 x 468. The default setting is content-aware encoding.
+- **Adaptive bitrate**: With Adaptive Bitrate, if you upload a video in 720p HD single bitrate to Azure Video Indexer and select Adaptive Bitrate, the encoder will use the [AdaptiveStreaming](/rest/api/media/transforms/create-or-update?tabs=HTTP#encodernamedpreset) preset. An output of 720p HD (no output exceeding 720p HD is created) and several lower quality outputs are created (for playback on smaller screens/low bandwidth environments). Each output will use the Media Encoder Standard base price and apply a multiplier for each output. The multiplier is 2x for HD, 1x for non-HD, and 0.25 for audio and billing is per minute of the input video.
+
+ **Example**: If you index a 40-minute 720p HD video in the US East region and have selected the streaming option of Adaptive Bitrate, 3 outputs will be created - 1 HD (multiplied by 2), 1 SD (multiplied by 1) and 1 audio track (multiplied by 0.25). This totals (2+1+0.25) * 40 = 130 billable output minutes.
+
+ Output minutes (standard encoder): 130 x $0.015/minute = $1.95.
+- **No streaming**: Insights are generated but no streaming operation is performed and the video isn't available on the Azure Video Indexer website. When No streaming is selected, you aren't billed for encoding.
+
+### Customizing content models - People/Animated characters and Brand categories
+
+Azure Video Indexer allows you to customize some of its models to adapt them to your specific use case. These models include animated characters, brands, language, and person. If you have customized models, this section enables you to configure whether one of the created models should be used for the indexing.
+
+## Next steps
+
+Learn more about [language support and supported languages](language-support.md).
azure-video-indexer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-support.md
This section describes languages supported by Azure Video Indexer API.
- Frame patterns (Only to Hebrew as of now) - Language customization
-| **Language** | **Code** | **Transcription** | **LID**\* | **MLID**\* | **Translation** | **Customization** (language model) |
-|::|:--:|:--:|:-:|:-:|:-:|::|
+| **Language** | **Code** | **Transcription** | **LID** | **MLID** | **Translation** | **Customization** (language model) |
+|:-:|:--:|:--:|:--:|:--:|:-:|:-:|
| Afrikaans | `af-ZA` | | | | | Γ£ö | | Arabic (Israel) | `ar-IL` | Γ£ö | | | | Γ£ö | | Arabic (Jordan) | `ar-JO` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
This section describes languages supported by Azure Video Indexer API.
| Bulgarian | `bg-BG` | | | | Γ£ö | | | Catalan | `ca-ES` | | | | Γ£ö | | | Chinese (Cantonese Traditional) | `zh-HK` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Chinese (Simplified) | `zh-Hans` | Γ£ö | Γ£ö\*| | Γ£ö | Γ£ö |
+| Chinese (Simplified) | `zh-Hans` | Γ£ö | Γ£ö\*<br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| | Γ£ö | Γ£ö |
| Chinese (Traditional) | `zh-Hant` | | | | Γ£ö | | | Croatian | `hr-HR` | | | | Γ£ö | | | Czech | `cs-CZ` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
This section describes languages supported by Azure Video Indexer API.
| Dutch | `nl-NL` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö | | English Australia | `en-AU` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö | | English United Kingdom | `en-GB` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| English United States | `en-US` | Γ£ö | Γ£ö\* | Γ£ö\* | Γ£ö | Γ£ö |
+| English United States | `en-US` | Γ£ö | Γ£ö\*<br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid) | Γ£ö\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| Γ£ö | Γ£ö |
| Estonian | `et-EE` | | | | Γ£ö | | | Fijian | `en-FJ` | | | | Γ£ö | | | Filipino | `fil-PH` | | | | Γ£ö | | | Finnish | `fi-FI` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| French | `fr-FR` | Γ£ö | Γ£ö\* | Γ£ö\* | Γ£ö | Γ£ö |
+| French | `fr-FR` | Γ£ö | Γ£ö\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| Γ£ö\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| Γ£ö | Γ£ö |
| French (Canada) | `fr-CA` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| German | `de-DE` | Γ£ö | Γ£ö \*| Γ£ö \*| Γ£ö | Γ£ö |
+| German | `de-DE` | Γ£ö | Γ£ö \* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| Γ£ö \* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| Γ£ö | Γ£ö |
| Greek | `el-GR` | | | | Γ£ö | | | Haitian | `fr-HT` | | | | Γ£ö | | | Hebrew | `he-IL` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö | | Hindi | `hi-IN` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö | | Hungarian | `hu-HU` | | | | Γ£ö | | | Indonesian | `id-ID` | | | | Γ£ö | |
-| Italian | `it-IT` | Γ£ö | Γ£ö\* | Γ£ö | Γ£ö | Γ£ö |
-| Japanese | `ja-JP` | Γ£ö | Γ£ö\* | Γ£ö | Γ£ö | Γ£ö |
+| Italian | `it-IT` | Γ£ö | Γ£ö\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid) | Γ£ö | Γ£ö | Γ£ö |
+| Japanese | `ja-JP` | Γ£ö | Γ£ö\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid) | Γ£ö | Γ£ö | Γ£ö |
| Kiswahili | `sw-KE` | | | | Γ£ö | | | Korean | `ko-KR` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö | | Latvian | `lv-LV` | | | | Γ£ö | |
This section describes languages supported by Azure Video Indexer API.
| Norwegian | `nb-NO` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö | | Persian | `fa-IR` | Γ£ö | | | Γ£ö | Γ£ö | | Polish | `pl-PL` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Portuguese | `pt-BR` | Γ£ö | Γ£ö\* | Γ£ö | Γ£ö | Γ£ö |
+| Portuguese | `pt-BR` | Γ£ö | Γ£ö\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid) | Γ£ö | Γ£ö | Γ£ö |
| Portuguese (Portugal) | `pt-PT` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö | | Romanian | `ro-RO` | | | | Γ£ö | |
-| Russian | `ru-RU` | Γ£ö | Γ£ö\* | Γ£ö | Γ£ö | Γ£ö |
+| Russian | `ru-RU` | Γ£ö | Γ£ö\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid) | Γ£ö | Γ£ö | Γ£ö |
| Samoan | `en-WS` | | | | Γ£ö | | | Serbian (Cyrillic) | `sr-Cyrl-RS` | | | | Γ£ö | | | Serbian (Latin) | `sr-Latn-RS` | | | | Γ£ö | | | Slovak | `sk-SK` | | | | Γ£ö | |
-| Slovenian | `sl-SI` | | | | Γ£ö | |
-| Spanish | `es-ES` | Γ£ö | Γ£ö\* | Γ£ö\* | Γ£ö | Γ£ö |
+| Slovenian | `sl-SI` | | | | Γ£ö | |
+| Spanish | `es-ES` | Γ£ö | Γ£ö\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| Γ£ö\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| Γ£ö | Γ£ö |
| Spanish (Mexico) | `es-MX` | Γ£ö | | | Γ£ö | Γ£ö | | Swedish | `sv-SE` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö | | Tamil | `ta-IN` | | | | Γ£ö | |
This section describes languages supported by Azure Video Indexer API.
| Urdu | `ur-PK` | | | | Γ£ö | | | Vietnamese | `vi-VN` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | |
-\*By default, languages marked with * (in the table above) are supported by language identification (LID) or/and multi-language identification (MLID) auto-detection. When [uploading a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) with an API, you can specify to use other supported languages, from the table above, by using `customLanguages` parameter. The `customLanguages` parameter allows up to 10 languages to be identified by LID or MLID.
+### Change default languages supported by LID and MLID
+
+Languages marked with * (in the table above) are used by default when languages are auto-detected by LID and/or MLID. You can specify other supported languages (listed in the table above) as default languages when [uploading a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) with an API by passing the `customLanguages` parameter. The `customLanguages` parameter allows up to 10 languages to be identified by LID or MLID.
> [!NOTE]
-> To change the default languages to auto-detect one or more languages by LID or MLID, set the `customLanguages` parameter.
+> To change the default languages that you want for LID or MLID to use when auto-detecting, call [upload a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) and set the `customLanguages` parameter.
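As a hedged sketch of such an upload request: the location, account ID, access token, video URL, the name value, and the particular language codes below are placeholders, and the `language=auto` value for auto-detection is an assumption to be confirmed against the upload API reference linked above.

```http
POST https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos?name=sample-video&videoUrl={videoUrl}&language=auto&customLanguages=en-US,es-ES,fr-FR&accessToken={accessToken}
```

Here `customLanguages` supplies the candidate languages that LID or MLID will consider, as described above.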
## Language support in frontend experiences
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
You can now edit the name of the speakers in the transcription using the Azure V
### Word level time annotation with confidence score
-An annotation is any type of additional information that is added to an already existing text, be it a transcription of an audio file or an original text file.
- Now supporting word level time annotation with confidence score.
+An annotation is any type of additional information that is added to an already existing text, be it a transcription of an audio file or an original text file.
+ ### Azure Monitor integration enabling indexing logs The new set of logs, described below, enables you to better monitor your indexing pipeline.
azure-vmware Deploy Disaster Recovery Using Jetstream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-disaster-recovery-using-jetstream.md
# Deploy disaster recovery using JetStream DR software
-[JetStream DR](https://www.jetstreamsoft.com/product-portfolio/jetstream-dr/) is a cloud-native disaster recovery solution designed to minimize downtime of virtual machines (VMs) if there was a disaster. Instances of JetStream DR are deployed at both the protected and recovery sites.
+[JetStream DR](https://www.jetstreamsoft.com/product-portfolio/jetstream-dr/) is a cloud-native disaster recovery solution designed to minimize downtime of virtual machines (VMs) if there is a disaster. Instances of JetStream DR are deployed at both the protected and recovery sites.
JetStream is built on the foundation of Continuous Data Protection (CDP), using [VMware vSphere API for I/O filtering (VAIO) framework](https://core.vmware.com/resource/vmware-vsphere-apis-io-filtering-vaio), which enables minimal or close to no data loss. JetStream DR provides the level of protection wanted for business and mission-critical applications. It also enables cost-effective DR by using minimal resources at the DR site and using cost-effective cloud storage, such as [Azure Blob Storage](https://azure.microsoft.com/services/storage/blobs/).
To learn more about JetStream DR, see:
| Items | Description | | | |
-| **JetStream Management Server Virtual Appliance (MSA)** | MSA enables both Day 0 and Day 2 configuration, such as primary sites, protection domains, and recovering VMs. MSA is installed on a vSphere node by the cloud admin. The MSA implements a vCenter Server plugin that allows you to manage JetStream DR natively from vCenter Server. The MSA doesn't handle replication data of protected VMs. |
-| **JetStream DR Virtual Appliance (DRVA)** | Linux-based Virtual Machine appliance receives protected VMs replication data from the source ESXi host. It's responsible for storing the replication data at the DR site, typically in an object store such as Azure Blob Storage. Depending on the number of protected VMs and the amount of storage to replicate, the private cloudadmin can create one or more DRVA instances. |
-| **JetStream ESXi host components (IO Filter packages)** | JetStream software installed on each ESXi host configured for JetStream DR. The host driver intercepts the vSphere VMs IO and sends the replication data to the DRVA. |
-| **JetStream protection domain** | Logical group of VMs that will be protected together using the same policies and run book. The data for all VMs in a protection domain is stored in the same Azure Blob container instance. The same DRVA instance handles replication to remote DR storage for all VMs in a protection domain. |
-| **Azure Blob Storage containers** | The protected VMs replicated data is stored in Azure Blobs. JetStream software creates one Azure Blob container instance for each JetStream protection domain. |
+| **JetStream Management Server Virtual Appliance (MSA)** | MSA enables both Day 0 and Day 2 configuration, such as primary sites, protection domains, and recovering VMs. The MSA is deployed from an OVA on a vSphere node by the cloud admin. The MSA collects and maintains statistics relevant to VM protection and implements a vCenter plugin that allows you to manage JetStream DR natively with the vSphere Client. The MSA doesn't handle replication data of protected VMs. |
+| **JetStream DR Virtual Appliance (DRVA)** | Linux-based Virtual Machine appliance receives protected VMs replication data from the source ESXi host. It maintains the replication log and manages the transfer of the VMs and their data to the object store such as Azure Blob Storage. Depending on the number of protected VMs and the amount of VM data to replicate, the private cloud admin can create one or more DRVA instances. |
+| **JetStream ESXi host components (IO Filter packages)** | JetStream software installed on each ESXi host configured for JetStream DR. The host driver intercepts the vSphere VMs IO and sends the replication data to the DRVA. The IO filters also monitor relevant events, such as vMotion, Storage vMotion, snapshots, etc. |
+| **JetStream Protected Domain** | Logical group of VMs that will be protected together using the same policies and runbook. The data for all VMs in a protection domain is stored in the same Azure Blob container instance. A single DRVA instance handles replication to remote DR storage for all VMs in a Protected Domain. |
+| **Azure Blob Storage containers** | The protected VMs replicated data is stored in Azure Blobs. JetStream software creates one Azure Blob container instance for each JetStream Protected Domain. |
To install JetStream DR in the on-premises data center and in the Azure VMware S
- Configure the cluster with the IO filter package (install JetStream VIB). - Provision Azure Blob (Azure Storage Account) in the same region as the DR Azure VMware Solution cluster. - Deploy the disaster recovery virtual appliance (DRVA) and assign a replication log volume (VMDK from existing datastore or shared iSCSI storage).
- - Create protected domains (groups of related VMs) and assign DRVAs and the Azure Blob Storage/ANF.
+ - Create Protected Domains (groups of related VMs) and assign DRVAs and the Azure Blob Storage/ANF.
- Start protection. - Install JetStream DR in the Azure VMware Solution private cloud:
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backup description: Provides a summary of support settings and limitations when backing up Azure VMs with the Azure Backup service. Previously updated : 10/31/2022 Last updated : 11/01/2022
Data disk size | Individual disk size can be up to 32 TB and a maximum of 256 TB
Storage type | Standard HDD, Standard SSD, Premium SSD. <br><br> Backup and restore of [ZRS disks](../virtual-machines/disks-redundancy.md#zone-redundant-storage-for-managed-disks) is supported. Managed disks | Supported. Encrypted disks | Supported.<br/><br/> Azure VMs enabled with Azure Disk Encryption can be backed up (with or without the Azure AD app).<br/><br/> Encrypted VMs can't be recovered at the file/folder level. You must recover the entire VM.<br/><br/> You can enable encryption on VMs that are already protected by Azure Backup. <br><br> You can back up and restore disks encrypted using platform-managed keys (PMKs) or customer-managed keys (CMKs). You can also assign a disk-encryption set while restoring in the same region (that is providing disk-encryption set while performing cross-region restore is currently not supported, however, you can assign the DES to the restored disk after the restore is complete).
-Disks with Write Accelerator enabled | Azure VM with WA disk backup is available in all Azure public regions starting from May 18, 2020. If WA disk backup is not required as part of VM backup, you can choose to remove with [**Selective disk** feature](selective-disk-backup-restore.md). <br><br>**Important** <br> Virtual machines with WA disks need internet connectivity for a successful backup (even though those disks are excluded from the backup).
Disks with Write Accelerator enabled | Azure VM with WA disk backup is available in all Azure public regions starting from May 18, 2022. If WA disk backup is not required as part of VM backup, you can choose to remove it with the [**Selective disk** feature](selective-disk-backup-restore.md). <br><br>**Important** <br> Virtual machines with WA disks need internet connectivity for a successful backup (even though those disks are excluded from the backup).
Disks enabled for access with private EndPoint | Unsupported. Back up & Restore deduplicated VMs/disks | Azure Backup doesn't support deduplication. For more information, see this [article](./backup-support-matrix.md#disk-deduplication-support) <br/> <br/> - Azure Backup doesn't deduplicate across VMs in the Recovery Services vault <br/> <br/> - If there are VMs in deduplication state during restore, the files can't be restored because the vault doesn't understand the format. However, you can successfully perform the full VM restore. Add disk to protected VM | Supported.
backup Geo Code List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/geo-code-list.md
Title: Geo-code mapping description: Learn about geo-codes mapped with the respective regions. Previously updated : 03/07/2022 Last updated : 11/01/2022+++ # Geo-code mapping This sample XML provides you an insight about the geo-codes mapped with the respective regions. Use these geo-codes to create and add custom DNS zones for private endpoint for Recovery Services vault.
+## Fetch mapping details
+
+To fetch the geo-code mapping list, run the following command:
+
+```azurecli-interactive
+ az account list-locations
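 # Optionally, list just the region names and display names (assumes you're signed in with the Azure CLI):
 az account list-locations --query "[].{Name:name, DisplayName:displayName}" --output table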
+```
+ ## Mapping details ```xml
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/best-practices.md
Title: Best practices description: Learn best practices and useful tips for developing your Azure Batch solutions. Previously updated : 12/13/2021 Last updated : 10/31/2022
This article discusses best practices and useful tips for using the Azure Batch
### Pool configuration and naming -- **Pool allocation mode:** When creating a Batch account, you can choose between two pool allocation modes: **Batch service** or **user subscription**. For most cases, you should use the default Batch service mode, in which pools are allocated behind the scenes in Batch-managed subscriptions. In the alternative user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription accounts are primarily used to enable a small but important subset of scenarios. For more information, see [Additional configuration for user subscription mode](batch-account-create-portal.md#additional-configuration-for-user-subscription-mode).
+- **Pool allocation mode:** When creating a Batch account, you can choose between two pool allocation modes: **Batch service** or **user subscription**. For most cases, you should use the default Batch service mode, in which pools are allocated behind the scenes in Batch-managed subscriptions. In the alternative user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription accounts are primarily used to enable a small but important subset of scenarios. For more information, see [configuration for user subscription mode](batch-account-create-portal.md#additional-configuration-for-user-subscription-mode).
-- **'virtualMachineConfiguration' or 'cloudServiceConfiguration':** While you can currently create pools using either configuration, new pools should be configured using 'virtualMachineConfiguration' and not 'cloudServiceConfiguration'. All current and new Batch features will be supported by Virtual Machine Configuration pools. Cloud Services Configuration pools do not support all features and no new capabilities are planned. You won't be able to create new 'cloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).
+- **'virtualMachineConfiguration' or 'cloudServiceConfiguration':** While you can currently create pools using either configuration, new pools should be configured using 'virtualMachineConfiguration' and not 'cloudServiceConfiguration'. All current and new Batch features will be supported by Virtual Machine Configuration pools. Cloud Services Configuration pools don't support all features and no new capabilities are planned. You won't be able to create new 'cloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).
-- **Job and task run time considerations:** If you have jobs comprised primarily of short-running tasks, and the expected total task counts are small, so that the overall expected run time of the job is not long, do not allocate a new pool for each job. The allocation time of the nodes will diminish the run time of the job.
+- **Job and task run time considerations:** If you have jobs comprised primarily of short-running tasks, and the expected total task counts are small, so that the overall expected run time of the job isn't long, don't allocate a new pool for each job. The allocation time of the nodes will diminish the run time of the job.
-- **Multiple compute nodes:** Individual nodes are not guaranteed to always be available. While uncommon, hardware failures, operating system updates, and a host of other issues can cause individual nodes to be offline. If your Batch workload requires deterministic, guaranteed progress, you should allocate pools with multiple nodes.
+- **Multiple compute nodes:** Individual nodes aren't guaranteed to always be available. While uncommon, hardware failures, operating system updates, and a host of other issues can cause individual nodes to be offline. If your Batch workload requires deterministic, guaranteed progress, you should allocate pools with multiple nodes.
-- **Images with impending end-of-life (EOL) dates:** We strongly recommended avoiding images with impending Batch support end of life (EOL) dates. These dates can be discovered via the [`ListSupportedImages` API](/rest/api/batchservice/account/listsupportedimages), [PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images). It is your responsibility to periodically refresh your view of the EOL dates pertinent to your pools and migrate your workloads before the EOL date occurs. If you're using a custom image with a specified node agent, ensure that you follow Batch support end-of-life dates for the image for which your custom image is derived or aligned with.
+- **Images with impending end-of-life (EOL) dates:** We strongly recommended avoiding images with impending Batch support end of life (EOL) dates. These dates can be discovered via the [`ListSupportedImages` API](/rest/api/batchservice/account/listsupportedimages), [PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images). It's your responsibility to periodically refresh your view of the EOL dates pertinent to your pools and migrate your workloads before the EOL date occurs. If you're using a custom image with a specified node agent, ensure that you follow Batch support end-of-life dates for the image for which your custom image is derived or aligned with.
-- **Unique resource names:** Batch resources (jobs, pools, etc.) often come and go over time. For example, you may create a pool on Monday, delete it on Tuesday, and then create another similar pool on Thursday. Each new resource you create should be given a unique name that you haven't used before. You can do this by using a GUID (either as the entire resource name, or as a part of it) or by embedding the date and time that the resource was created in the resource name. Batch supports [DisplayName](/dotnet/api/microsoft.azure.batch.jobspecification.displayname), which can give a resource a more readable name even if the actual resource ID is something that isn't human-friendly. Using unique names makes it easier for you to differentiate which particular resource did something in logs and metrics. It also removes ambiguity if you ever have to file a support case for a resource.
+- **Unique resource names:** Batch resources (jobs, pools, etc.) often come and go over time. For example, you may create a pool on Monday, delete it on Tuesday, and then create another similar pool on Thursday. Each new resource you create should be given a unique name that you haven't used before. You can create uniqueness by using a GUID (either as the entire resource name, or as a part of it) or by embedding the date and time that the resource was created in the resource name. Batch supports [DisplayName](/dotnet/api/microsoft.azure.batch.jobspecification.displayname), which can give a resource a more readable name even if the actual resource ID is something that isn't human-friendly. Using unique names makes it easier for you to differentiate which particular resource did something in logs and metrics. It also removes ambiguity if you ever have to file a support case for a resource.
-- **Continuity during pool maintenance and failure:** It's best to have your jobs use pools dynamically. If your jobs use the same pool for everything, there's a chance that jobs won't run if something goes wrong with the pool. This is especially important for time-sensitive workloads. To fix this, select or create a pool dynamically when you schedule each job, or have a way to override the pool name so that you can bypass an unhealthy pool.
+- **Continuity during pool maintenance and failure:** It's best to have your jobs use pools dynamically. If your jobs use the same pool for everything, there's a chance that jobs won't run if something goes wrong with the pool. This principle is especially important for time-sensitive workloads. For example, select or create a pool dynamically when you schedule each job, or have a way to override the pool name so that you can bypass an unhealthy pool.
-- **Business continuity during pool maintenance and failure:** There are many reasons why a pool may not grow to the size you desire, such as internal errors or capacity constraints. Make sure you can retarget jobs at a different pool (possibly with a different VM size; Batch supports this via [UpdateJob](/dotnet/api/microsoft.azure.batch.protocol.joboperationsextensions.update)) if necessary. Avoid relying on a static pool ID with the expectation that it will never be deleted and never change.
+- **Business continuity during pool maintenance and failure:** There are many reasons why a pool may not grow to the size you desire, such as internal errors or capacity constraints. Make sure you can retarget jobs at a different pool (possibly with a different VM size using [UpdateJob](/dotnet/api/microsoft.azure.batch.protocol.joboperationsextensions.update)) if necessary. Avoid relying on a static pool ID with the expectation that it will never be deleted and never change.
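Building on the unique-names guidance in the list above, the following is a minimal sketch (not a prescribed pattern) using the Batch .NET client; the account URL, key, and pool ID are placeholders you'd replace with your own values.

```csharp
// Minimal sketch: a unique, machine-friendly job ID plus a human-readable DisplayName.
// The account URL, key, and pool ID below are placeholders, not real values.
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;

var credentials = new BatchSharedKeyCredentials(
    "https://<account>.<region>.batch.azure.com", "<account>", "<account-key>");

using BatchClient batchClient = BatchClient.Open(credentials);

CloudJob job = batchClient.JobOperations.CreateJob();
job.Id = $"nightly-render-{Guid.NewGuid():N}";                    // unique ID you never reuse
job.DisplayName = $"Nightly render, created {DateTime.UtcNow:u}"; // readable name for logs and metrics
job.PoolInformation = new PoolInformation { PoolId = "render-pool-2022-11" };
job.Commit();
```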
### Pool security #### Isolation boundary
-For the purposes of isolation, if your scenario requires isolating jobs or tasks from each other, do so by having them in separate pools. A pool is the security isolation boundary in Batch, and by default, two pools are not visible or able to communicate with each other. Avoid using separate Batch accounts as a means of security isolation unless the larger environment from which the Batch account operates in requires isolation.
+For the purposes of isolation, if your scenario requires isolating jobs or tasks from each other, do so by having them in separate pools. A pool is the security isolation boundary in Batch, and by default, two pools aren't visible or able to communicate with each other. Avoid using separate Batch accounts as a means of security isolation unless the larger environment from which the Batch account operates in requires isolation.
#### Batch Node Agent updates
-Batch node agents are not automatically upgraded for pools that have non-zero compute nodes. In order to ensure your Batch pools receive the latest security fixes and updates to the Batch node agent, you need to either resize the pool to zero compute nodes or recreate the pool. It is recommended to monitor the [Batch Node Agent release notes](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md) to understand changes to new Batch node agent versions and when they were released so that you can plan to upgrade to the latest agent version.
+Batch node agents aren't automatically upgraded for pools that have non-zero compute nodes. To ensure your Batch pools receive the latest security fixes and updates to the Batch node agent, you need to either resize the pool to zero compute nodes or recreate the pool. It's recommended to monitor the [Batch Node Agent release notes](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md) to understand changes in new Batch node agent versions and when they were released. Checking regularly for updates enables you to plan upgrades to the latest agent version.
-Before you recreate or resize your pool, you should download any node agent logs for debugging purposes if you are experiencing issues with your Batch pool or compute nodes, as discussed in the [Nodes](#nodes) section.
+Before you recreate or resize your pool, you should download any node agent logs for debugging purposes if you're experiencing issues with your Batch pool or compute nodes. This is further discussed in the [Nodes](#nodes) section.
> [!NOTE] > For general guidance about security in Azure Batch, see [Batch security and compliance best practices](security-best-practices.md).
Pool lifetime can vary depending upon the method of allocation and options appli
- **Pool recreation:** Avoid deleting and recreating pools on a daily basis. Instead, create a new pool and then update your existing jobs to point to the new pool. Once all of the tasks have been moved to the new pool, then delete the old pool. -- **Pool efficiency and billing:** Batch itself incurs no extra charges, but you do incur charges for Azure resources that are utilized, such as compute, storage, networking and any other resources that may be required for your Batch workload. You're billed for every compute node in the pool, regardless of the state it is in. For more information, see [Cost analysis and budgets for Azure Batch](budget.md).
+- **Pool efficiency and billing:** Batch itself incurs no extra charges. However, you do incur charges for Azure resources utilized, such as compute, storage, networking and any other resources that may be required for your Batch workload. You're billed for every compute node in the pool, regardless of the state it's in. For more information, see [Cost analysis and budgets for Azure Batch](budget.md).
- **Ephemeral OS disks:** Virtual Machine Configuration pools can use [ephemeral OS disks](create-pool-ephemeral-os-disk.md), which create the OS disk on the VM cache or temporary SSD, to avoid extra costs associated with managed disks. ### Pool allocation failures
-Pool allocation failures can happen at any point during first allocation or subsequent resizes. This can be due to temporary capacity exhaustion in a region or failures in other Azure services that Batch relies on. Your core quota is not a guarantee but rather a limit.
+Pool allocation failures can happen at any point during first allocation or subsequent resizes. These failures can be due to temporary capacity exhaustion in a region or failures in other Azure services that Batch relies on. Your core quota isn't a guarantee but rather a limit.
### Unplanned downtime
-It's possible for Batch pools to experience downtime events in Azure. Keep this in mind when planning and developing your scenario or workflow for Batch. If nodes fail, Batch automatically attempts to recover these compute nodes on your behalf. This may trigger rescheduling any running task on the node that is recovered. To learn more about interrupted tasks, see [Designing for retries](#design-for-retries-and-re-execution).
+It's possible for Batch pools to experience downtime events in Azure. Understand that problems can arise, and develop your workflow to be resilient to re-executions. If nodes fail, Batch automatically attempts to recover these compute nodes on your behalf. This recovery may trigger rescheduling any running task on the node that is restored or on a different, available node. To learn more about interrupted tasks, see [Designing for retries](#design-for-retries-and-re-execution).
### Custom image pools
-When you create an Azure Batch pool using the Virtual Machine Configuration, you specify a VM image that provides the operating system for each compute node in the pool. You can create the pool with a supported Azure Marketplace image, or you can [create a custom image with an Azure Compute Gallery image](batch-sig-images.md). While you can also use a [managed image](batch-custom-images.md) to create a custom image pool, we recommend creating custom images using the Azure Compute Gallery whenever possible. Using the Azure Compute Gallery helps you provision pools faster, scale larger quantities of VMs, and improve reliability when provisioning VMs.
+When you create an Azure Batch pool using the Virtual Machine Configuration, you specify a VM image that provides the operating system for each compute node in the pool. You can create the pool with a supported Azure Marketplace image, or you can [create a custom image with an Azure Compute Gallery image](batch-sig-images.md). While you can also use a [managed image](batch-custom-images.md) to create a custom image pool, we recommend creating custom images using the Azure Compute Gallery whenever possible. Using the Azure Compute Gallery helps you provision pools faster, scale larger quantities of VMs, and improve reliability when provisioning VMs.
### Third-party images
A [job](jobs-and-tasks.md#jobs) is a container designed to contain hundreds, tho
### Fewer jobs, more tasks
-Using a job to run a single task is inefficient. For example, it's more efficient to use a single job containing 1000 tasks rather than creating 100 jobs that contain 10 tasks each. Running 1000 jobs, each with a single task, would be the least efficient, slowest, and most expensive approach to take.
+Using a job to run a single task is inefficient. For example, it's more efficient to use a single job containing 1000 tasks rather than creating 100 jobs that contain 10 tasks each. If you used 1000 jobs, each with a single task, that would be the least efficient, slowest, and most expensive approach to take.
-Because of this, avoid designing a Batch solution that requires thousands of simultaneously active jobs. There is no quota for tasks, so executing many tasks under as few jobs as possible efficiently uses your [job and job schedule quotas](batch-quota-limit.md#resource-quotas).
+Avoid designing a Batch solution that requires thousands of simultaneously active jobs. There's no quota for tasks, so executing many tasks under as few jobs as possible efficiently uses your [job and job schedule quotas](batch-quota-limit.md#resource-quotas).
### Job lifetime A Batch job has an indefinite lifetime until it's deleted from the system. Its state designates whether it can accept more tasks for scheduling or not.
-A job does not automatically move to completed state unless explicitly terminated. This can be automatically triggered through the [onAllTasksComplete](/dotnet/api/microsoft.azure.batch.common.onalltaskscomplete) property or [maxWallClockTime](/rest/api/batchservice/job/add#jobconstraints).
+A job doesn't automatically move to completed state unless explicitly terminated. This action can be automatically triggered through the [onAllTasksComplete](/dotnet/api/microsoft.azure.batch.common.onalltaskscomplete) property or [maxWallClockTime](/rest/api/batchservice/job/add#jobconstraints).
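As a concrete illustration of the two completion triggers mentioned above, here's a hedged sketch using the Batch .NET client; it assumes an existing `BatchClient` named `batchClient` and a hypothetical job ID.

```csharp
// Hedged sketch: let Batch complete the job automatically instead of leaving it active.
// Assumes an existing BatchClient (batchClient) and a hypothetical job ID.
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Common;

CloudJob job = batchClient.JobOperations.GetJob("nightly-render-20221101");

// Terminate (complete) the job as soon as all of its tasks finish...
job.OnAllTasksComplete = OnAllTasksComplete.TerminateJob;

// ...and cap the total wall-clock time so it can't stay active indefinitely.
job.Constraints = new JobConstraints(maxWallClockTime: TimeSpan.FromHours(8));

job.Commit();
```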
-There is a default [active job and job schedule quota](batch-quota-limit.md#resource-quotas). Jobs and job schedules in completed state do not count towards this quota.
+There's a default [active job and job schedule quota](batch-quota-limit.md#resource-quotas). Jobs and job schedules in completed state don't count towards this quota.
## Tasks
There is a default [active job and job schedule quota](batch-quota-limit.md#reso
### Save task data
-Compute nodes are by their nature ephemeral. Batch features such as [autopool](nodes-and-pools.md#autopools) and [autoscale](nodes-and-pools.md#automatic-scaling-policy) can make it easy for nodes to disappear. When nodes leave a pool (due to a resize or a pool delete), all the files on those nodes are also deleted. Because of this, a task should move its output off of the node it is running on and to a durable store before it completes. Similarly, if a task fails, it should move logs required to diagnose the failure to a durable store.
+Compute nodes are by their nature ephemeral. Batch features such as [autopool](nodes-and-pools.md#autopools) and [autoscale](nodes-and-pools.md#automatic-scaling-policy) can make it easy for nodes to disappear. When nodes leave a pool (due to a resize or a pool delete), all the files on those nodes are also deleted. Because of this behavior, a task should move its output off of the node it's running on, and to a durable store before it completes. Similarly, if a task fails, it should move logs required to diagnose the failure to a durable store.
-Batch has integrated support Azure Storage to upload data via [OutputFiles](batch-task-output-files.md), as well as a variety of shared file systems, or you can perform the upload yourself in your tasks.
+Batch has integrated support for uploading data to Azure Storage via [OutputFiles](batch-task-output-files.md) and for various shared file systems, or you can perform the upload yourself in your tasks.
### Manage task lifetime
-Delete tasks when they are no longer needed, or set a [retentionTime](/dotnet/api/microsoft.azure.batch.taskconstraints.retentiontime) task constraint. If a `retentionTime` is set, Batch automatically cleans up the disk space used by the task when the `retentionTime` expires.
+Delete tasks when they're no longer needed, or set a [retentionTime](/dotnet/api/microsoft.azure.batch.taskconstraints.retentiontime) task constraint. If a `retentionTime` is set, Batch automatically cleans up the disk space used by the task when the `retentionTime` expires.
-Deleting tasks accomplishes two things. It ensures that you do not have a build-up of tasks in the job, which can make it harder to query/find the task you're interested in (because you'll have to filter through the Completed tasks). It also cleans up the corresponding task data on the node (provided `retentionTime` has not already been hit). This helps ensure that your nodes don't fill up with task data and run out of disk space.
+Deleting tasks accomplishes two things:
+
+- Ensures that you don't have a build-up of tasks in the job. This action will help avoid difficulty in finding the task you're interested in as you'll have to filter through the Completed tasks.
+- Cleans up the corresponding task data on the node (provided `retentionTime` hasn't already been hit). This action helps ensure that your nodes don't fill up with task data and run out of disk space.
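To make the retention guidance above concrete, here's a hedged sketch with the Batch .NET client; the job and task IDs are hypothetical, and an existing `BatchClient` named `batchClient` is assumed.

```csharp
// Hedged sketch: set a retention time so Batch cleans up task data on the node,
// then delete the task record once its output is safely in durable storage.
using Microsoft.Azure.Batch;

var task = new CloudTask("process-file-0001", "cmd /c myapp.exe data-0001.bin")
{
    // Node-side task data is removed this long after the task completes.
    Constraints = new TaskConstraints(retentionTime: TimeSpan.FromDays(1))
};

await batchClient.JobOperations.AddTaskAsync("nightly-render-20221101", task);

// Later, after results have been copied off the node:
await batchClient.JobOperations.DeleteTaskAsync("nightly-render-20221101", "process-file-0001");
```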
### Submit large numbers of tasks in collection
Tasks can be submitted on an individual basis or in collections. Submit tasks in
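As an illustrative sketch of submitting tasks as a collection rather than one call per task (assuming an existing `BatchClient` named `batchClient` and a hypothetical job ID):

```csharp
// Hedged sketch: build the tasks locally, then add them to the job in one call.
using System.Collections.Generic;
using Microsoft.Azure.Batch;

var tasks = new List<CloudTask>();
for (int i = 0; i < 1000; i++)
{
    tasks.Add(new CloudTask($"frame-{i:D4}", $"cmd /c render.exe --frame {i}"));
}

// One collection submission instead of 1000 individual AddTask calls.
await batchClient.JobOperations.AddTaskAsync("nightly-render-20221101", tasks);
```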
### Set max tasks per node appropriately
-Batch supports oversubscribing tasks on nodes (running more tasks than a node has cores). It's up to you to ensure that your tasks "fit" into the nodes in your pool. For example, you may have a degraded experience if you attempt to schedule eight tasks that each consume 25% CPU usage onto one node (in a pool with `taskSlotsPerNode = 8`).
+Batch supports oversubscribing tasks on nodes (running more tasks than a node has cores). It's up to you to ensure that your tasks are right-sized for the nodes in your pool. For example, you may have a degraded experience if you attempt to schedule eight tasks that each consume 25% CPU usage onto one node (in a pool with `taskSlotsPerNode = 8`).
### Design for retries and re-execution
There are no design differences when executing your tasks on dedicated or [Spot
### Build durable tasks
-Tasks should be designed to withstand failure and accommodate retry. This is especially important for long running tasks. To do this, ensure tasks generate the same, single result even if they are run more than once. One way to achieve this is to make your tasks "goal seeking." Another way is to make sure your tasks are idempotent (tasks will have the same outcome no matter how many times they are run).
+Tasks should be designed to withstand failure and accommodate retry. This principle is especially important for long running tasks. Ensure that your tasks generate the same, single result even if they're run more than once. One way to achieve this outcome is to make your tasks "goal seeking." Another way is to make sure your tasks are idempotent (tasks will have the same outcome no matter how many times they're run).
A common example is a task to copy files to a compute node. A simple approach is a task that copies all the specified files every time it runs, which is inefficient and isn't built to withstand failure. Instead, create a task to ensure the files are on the compute node; a task that doesn't recopy files that are already present. In this way, the task picks up where it left off if it was interrupted. ### Avoid short execution time
-Tasks that only run for one to two seconds are not ideal. Try to do a significant amount of work in an individual task (10 second minimum, going up to hours or days). If each task is executing for one minute (or more), then the scheduling overhead as a fraction of overall compute time is small.
+Tasks that only run for one to two seconds aren't ideal. Try to do a significant amount of work in an individual task (10 second minimum, going up to hours or days). If each task is executing for one minute (or more), then the scheduling overhead as a fraction of overall compute time is small.
### Use pool scope for short tasks on Windows nodes
A [compute node](nodes-and-pools.md#nodes) is an Azure virtual machine (VM) or c
### Idempotent start tasks
-Just as with other tasks, the node [start task](jobs-and-tasks.md#start-task) should be idempotent, as it will be rerun every time the node boots. An idempotent task is simply one that produces a consistent result when run multiple times.
+As with other tasks, the node [start task](jobs-and-tasks.md#start-task) should be idempotent, as it will be rerun every time the node boots. An idempotent task is simply one that produces a consistent result when run multiple times.
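A hedged sketch of such a start task, assuming an unbound `CloudPool` named `pool`; the paths and tool name are illustrative only.

```csharp
// Hedged sketch: the command only copies the tool if it isn't already present,
// so rerunning the start task after a reboot produces the same result.
using Microsoft.Azure.Batch;

pool.StartTask = new StartTask
{
    CommandLine = "/bin/bash -c \"[ -f $AZ_BATCH_NODE_SHARED_DIR/mytool ] || cp mytool $AZ_BATCH_NODE_SHARED_DIR/mytool\"",
    WaitForSuccess = true,
    MaxTaskRetryCount = 2
};
```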
### Isolated nodes
Consider using isolated VM sizes for workloads with compliance or regulatory req
### Manage long-running services via the operating system services interface
-Sometimes there is a need to run another agent alongside the Batch agent in the node. For example, you may want to gather data from the node and report it. We recommend that these agents be deployed as OS services, such as a Windows service or a Linux `systemd` service.
+Sometimes there's a need to run another agent alongside the Batch agent in the node. For example, you may want to gather data from the node and report it. We recommend that these agents be deployed as OS services, such as a Windows service or a Linux `systemd` service.
-When running these services, they must not take file locks on any files in Batch-managed directories on the node, because otherwise Batch will be unable to delete those directories due to the file locks. For example, if installing a Windows service in a start task, instead of launching the service directly from the start task working directory, copy the files elsewhere (or if the files exist just skip the copy). Then install the service from that location. When Batch reruns your start task, it will delete the start task working directory and create it again. This works because the service has file locks on the other directory, not the start task working directory.
+These services must not take file locks on any files in Batch-managed directories on the node, because otherwise Batch will be unable to delete those directories due to the file locks. For example, if installing a Windows service in a start task, instead of launching the service directly from the start task working directory, copy the files elsewhere (or if the files exist just skip the copy). Then install the service from that location. When Batch reruns your start task, it will delete the start task working directory and create it again.
### Avoid creating directory junctions in Windows Directory junctions, sometimes called directory hard-links, are difficult to deal with during task and job cleanup. Use symlinks (soft-links) rather than hard-links.
+### Temporary disks and `AZ_BATCH_NODE_ROOT_DIR`
+
+For Batch-compatible VM sizes, Batch relies on the VM temporary disk to store metadata related to task execution, along with any artifacts of each task
+execution. Examples of these temporary disk mount points or directories are: `/mnt/batch`, `/mnt/resource/batch`, and `D:\batch\tasks`.
+Replacing, remounting, junctioning, symlinking, or otherwise redirecting these mount points and directories or any of the parent directories
+isn't supported and can lead to instability. If you require more disk space, consider using a VM size or family that has temporary
+disk space that meets your requirements or [attaching data disks](/rest/api/batchservice/pool/add#datadisk). For more information, see the next
+section about attaching and preparing data disks for compute nodes.
+
+### Attaching and preparing data disks
+
+Each individual compute node will have the exact same data disk specification attached if specified as part of the Batch pool instance. Only
+new data disks may be attached to Batch pools. These data disks attached to compute nodes aren't automatically partitioned, formatted or
+mounted. It's your responsibility to perform these operations as part of your [start task](jobs-and-tasks.md#start-task). These start tasks
+must be crafted to be idempotent. A re-execution of the start task after the compute node has been provisioned is possible. If the start
+task isn't idempotent, potential data loss can occur on the data disks.
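As a hedged sketch of both points, attaching a data disk at pool creation and preparing it in an idempotent start task, assuming an existing `BatchClient` named `batchClient`; the image reference, VM size, disk size, and mount path are illustrative, not prescriptive.

```csharp
// Hedged sketch only: attach one data disk per node, then format and mount it idempotently.
using System.Collections.Generic;
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Common;

var vmConfig = new VirtualMachineConfiguration(
    new ImageReference(publisher: "canonical", offer: "0001-com-ubuntu-server-focal", sku: "20_04-lts", version: "latest"),
    nodeAgentSkuId: "batch.node.ubuntu 20.04");

// Every node gets an identical, raw 256 GiB data disk at LUN 0.
vmConfig.DataDisks = new List<DataDisk> { new DataDisk(0, 256) };

CloudPool pool = batchClient.PoolOperations.CreatePool(
    poolId: "data-disk-pool",
    virtualMachineSize: "Standard_D4s_v3",
    virtualMachineConfiguration: vmConfig,
    targetDedicatedComputeNodes: 2);

// Format only when no filesystem exists, and mount only when not already mounted,
// so a re-executed start task can't destroy data on the disk.
pool.StartTask = new StartTask
{
    CommandLine = "/bin/bash -c \"d=/dev/disk/azure/scsi1/lun0; blkid $d || mkfs.ext4 $d; mkdir -p /data; mountpoint -q /data || mount $d /data\"",
    UserIdentity = new UserIdentity(new AutoUserSpecification(elevationLevel: ElevationLevel.Admin)),
    WaitForSuccess = true
};

pool.Commit();
```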
+
+#### Preparing data disks in Linux Batch pools
+
+Azure data disks in Linux are presented as block devices and assigned a typical `sd[X]` identifier. You shouldn't rely on static `sd[X]`
+assignments as these labels are dynamically assigned at boot time and aren't guaranteed to be consistent between the first and any subsequent
+boots. You should identify your attached disks through the mappings presented in `/dev/disk/azure/scsi1/`. For example, if you specified LUN 0
+for your data disk in the AddPool API, then this disk would manifest as `/dev/disk/azure/scsi1/lun0`. As an example, if you were to list this
+directory, you could potentially see:
+
+```
+user@host:~$ ls -l /dev/disk/azure/scsi1/
+total 0
+lrwxrwxrwx 1 root root 12 Oct 31 15:16 lun0 -> ../../../sdc
+```
+
+There's no need to translate the reference back to the `sd[X]` mapping in your preparation script; instead, refer to the device directly.
+In this example, this device would be `/dev/disk/azure/scsi1/lun0`. You could provide this ID directly to `fdisk`, `mkfs`, and any other
+tooling required for your workflow.
+
+For more information about Azure data disks in Linux, see this [article](../virtual-machine-scale-sets/tutorial-use-disks-cli.md).
+
+#### Preparing data disks in Windows Batch pools
+
+Azure data disks attached to Batch Windows compute nodes are presented unpartitioned and unformatted. You'll need to enumerate disks
+with a `RAW` partition style and act on them as part of your start task. This information can be retrieved using the `Get-Disk` PowerShell cmdlet.
+As an example, you could potentially see:
+
+```
+PS C:\Windows\system32> Get-Disk
+
+Number Friendly Name Serial Number HealthStatus OperationalStatus Total Size Partition
+ Style
+------ ------------- ------------- ------------ ----------------- ---------- ---------
+0 Virtual HD Healthy Online 30 GB MBR
+1 Virtual HD Healthy Online 32 GB MBR
+2 Msft Virtu... Healthy Online 64 GB RAW
+```
+
+In this example, disk number 2 is the uninitialized data disk attached to this compute node. These disks can then be initialized, partitioned,
+and formatted as required for your workflow.
+
+For more information about Azure data disks in Windows, see this [article](../virtual-machine-scale-sets/tutorial-use-disks-powershell.md).
+ ### Collect Batch agent logs If you notice a problem involving the behavior of a node or tasks running on a node, collect the Batch agent logs prior to deallocating the nodes in question. The Batch agent logs can be collected using the Upload Batch service logs API. These logs can be supplied as part of a support ticket to Microsoft and will help with issue troubleshooting and resolution.
Review the following guidance related to connectivity in your Batch solutions.
### Network Security Groups (NSGs) and User Defined Routes (UDRs)
-When provisioning [Batch pools in a virtual network](batch-virtual-network.md), ensure that you are closely following the guidelines regarding the use of the `BatchNodeManagement` service tag, ports, protocols and direction of the rule. Use of the service tag is highly recommended; do not use underlying Batch service IP addresses as these can change over time. Using Batch service IP addresses directly can cause instability, interruptions, or outages for your Batch pools.
+When provisioning [Batch pools in a virtual network](batch-virtual-network.md), ensure that you're closely following the guidelines regarding the use of the `BatchNodeManagement.<region>` service tag, ports, protocols and direction of the rule. Use of the service tag is highly recommended; don't use underlying Batch service IP addresses as they can change over time. Using Batch service IP addresses directly can cause instability, interruptions, or outages for your Batch pools.
-For User Defined Routes (UDRs), it is recommended to use `BatchNodeManagement` [service tags](../virtual-network/virtual-networks-udr-overview.md#service-tags-for-user-defined-routes) instead of Batch service IP addresses as these can change over time.
+For User Defined Routes (UDRs), it's recommended to use `BatchNodeManagement.<region>` [service tags](../virtual-network/virtual-networks-udr-overview.md#service-tags-for-user-defined-routes) instead of Batch service IP addresses as they can change over time.
### Honoring DNS
-Ensure that your systems honor DNS Time-to-Live (TTL) for your Batch account service URL. Additionally, ensure that your Batch service clients and other connectivity mechanisms to the Batch service do not rely on IP addresses (or [create a pool with static public IP addresses](create-pool-public-ip.md) as described below).
+Ensure that your systems honor DNS Time-to-Live (TTL) for your Batch account service URL. Additionally, ensure that your Batch service clients and other connectivity mechanisms to the Batch service don't rely on IP addresses.
-If your requests receive 5xx level HTTP responses and there is a "Connection: close" header in the response, your Batch service client should observe the recommendation by closing the existing connection, re-resolving DNS for the Batch account service URL, and attempt following requests on a new connection.
+If your requests receive 5xx level HTTP responses and there's a "Connection: close" header in the response, your Batch service client should observe the recommendation by closing the existing connection, re-resolving DNS for the Batch account service URL, and attempt following requests on a new connection.
### Retry requests automatically
Ensure that your Batch service clients have appropriate retry policies in place
### Static public IP addresses
-Typically, virtual machines in a Batch pool are accessed through public IP addresses that can change over the lifetime of the pool. This can make it difficult to interact with a database or other external service that limits access to certain IP addresses. To ensure that the public IP addresses in your pool don't change unexpectedly, you can create a pool using a set of static public IP addresses that you control. For more information, see [Create an Azure Batch pool with specified public IP addresses](create-pool-public-ip.md).
+Typically, virtual machines in a Batch pool are accessed through public IP addresses that can change over the lifetime of the pool. This dynamic nature can make it difficult to interact with a database or other external service that limits access to certain IP addresses. To address this concern, you can create a pool using a set of static public IP addresses that you control. For more information, see [Create an Azure Batch pool with specified public IP addresses](create-pool-public-ip.md).
### Testing connectivity with Cloud Services configuration
-You can't use the normal "ping"/ICMP protocol with cloud services, because the ICMP protocol is not permitted through the Azure load balancer. For more information, see [Connectivity and networking for Azure Cloud Services](../cloud-services/cloud-services-connectivity-and-networking-faq.yml#can-i-ping-a-cloud-service-).
+You can't use the normal "ping"/ICMP protocol with cloud services, because the ICMP protocol isn't permitted through the Azure load balancer. For more information, see [Connectivity and networking for Azure Cloud Services](../cloud-services/cloud-services-connectivity-and-networking-faq.yml#can-i-ping-a-cloud-service-).
## Batch node underlying dependencies
Consider the following dependencies and restrictions when designing your Batch s
### System-created resources
-Azure Batch creates and manages a set of users and groups on the VM, which should not be altered. They are as follows:
+Azure Batch creates and manages a set of users and groups on the VM, which shouldn't be altered:
Windows:
Linux:
Batch actively tries to clean up the working directory that tasks are run in, once their retention time expires. Any files written outside of this directory are [your responsibility to clean up](#manage-task-lifetime) to avoid filling up disk space.
-The automated cleanup for the working directory will be blocked if you run a service on Windows from the startTask working directory, due to the folder still being in use. This will result in degraded performance. To fix this, change the directory for that service to a separate directory that isn't managed by Batch.
+The automated cleanup for the working directory will be blocked if you run a service on Windows from the start task working directory, due to the folder still being in use. This action will lead to degraded performance. To fix this issue, change the directory for that service to a separate directory that isn't managed by Batch.
## Next steps
cdn Cdn Msft Http Debug Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-msft-http-debug-headers.md
X-Cache: PRIVATE_NOSTORE | This header is returned when the request cannot be ca
X-Cache: CONFIG_NOCACHE | This header is returned when the request is configured not to cache in the CDN profile. X-Cache: N/A | This header is returned when the request was denied by Signed URL and Rules Set.
-For additional information on HTTP headers supported in Azure CDN, see [Front Door to backend](../frontdoor/front-door-http-headers-protocol.md#front-door-to-backend).
+For additional information on HTTP headers supported in Azure CDN, see [Front Door to backend](../frontdoor/front-door-http-headers-protocol.md#from-the-front-door-to-the-backend).
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/troubleshooting.md
In order to **delete** your user settings Cloud Shell saves for you such as pref
Bash: ```
- token=$(az account get-access-token --resource "https://management.azure.com/" | jq -r ".accessToken")
- curl -X DELETE https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -H Authorization:"Bearer $token"
+ token="Bearer $(az account get-access-token --resource "https://management.azure.com/" | jq -r ".accessToken")"
+ curl -X DELETE https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -H Authorization:"$token"
``` PowerShell:
cognitive-services Create Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/create-resource.md
+
+ Title: Create an Anomaly Detector resource
+
+description: Create an Anomaly Detector resource
++++++ Last updated : 11/01/2022++++
+# Create an Anomaly Detector resource
+
+The Anomaly Detector service is a cloud-based Cognitive Service that uses machine-learning models to detect anomalies in your time series data. In this article, you'll learn how to create an Anomaly Detector resource in the Azure portal.
+
+## Create an Anomaly Detector resource in Azure portal
+
+1. Create an Azure subscription if you don't have one - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
+1. Once you have your Azure subscription, [create an Anomaly Detector resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector) in the Azure portal, and fill out the following fields:
+
+ - **Subscription**: Select your current subscription.
+ - **Resource group**: The [Azure resource group](/azure/cloud-adoption-framework/govern/resource-consistency/resource-access-management#what-is-an-azure-resource-group) that will contain your resource. You can create a new group or add it to a pre-existing group.
+ - **Region**: Select your local region, see supported [Regions](../regions.md).
+ - **Name**: Enter a name for your resource. We recommend using a descriptive name, for example *multivariate-msft-test*.
+ - **Pricing tier**: The cost of your resource depends on the pricing tier you choose and your usage. For more information, see [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/anomaly-detector/). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of create a resource user experience](../media/create-resource/create-resource.png)
+
+1. Select **Identity** in the banner above and make sure you set the status to **On**, which enables Anomaly Detector to access your data in Azure in a secure way. Then select **Review + create**.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of enable managed identity](../media/create-resource/enable-managed-identity.png)
+
+1. Wait a few seconds until validation passes, and then select the **Create** button in the bottom-left corner.
+1. After you select **Create**, you'll be redirected to a new page that says **Deployment in progress**. After a few seconds, you'll see a message that says **Your deployment is complete**. Then select **Go to resource**.
+
+## Get Endpoint URL and keys
+
+In your resource, select **Keys and Endpoint** on the left navigation bar, then copy the **key** (either key1 or key2 will work) and **endpoint** values from your Anomaly Detector resource. You'll need the key and endpoint values to connect your application to the Anomaly Detector API.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of copy key and endpoint user experience](../media/create-resource/copy-key-endpoint.png)
+
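Once you have the key and endpoint, a minimal connectivity sketch with the `Azure.AI.AnomalyDetector` client library looks like the following; the environment variable names are placeholders, not required names.

```csharp
// Hedged sketch: create an Anomaly Detector client from the endpoint and key copied above.
// The environment variable names below are placeholders.
using Azure;
using Azure.AI.AnomalyDetector;

string endpoint = Environment.GetEnvironmentVariable("ANOMALY_DETECTOR_ENDPOINT");
string key = Environment.GetEnvironmentVariable("ANOMALY_DETECTOR_KEY");

var client = new AnomalyDetectorClient(new Uri(endpoint), new AzureKeyCredential(key));
```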
+That's it! You can start preparing your data for the next steps.
+
+## Next steps
+
+* [Join us to get more support!](https://aka.ms/adadvisorsjoin)
cognitive-services Prepare Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/prepare-data.md
+
+ Title: Prepare your data and upload to Storage Account
+
+description: Prepare your data and upload to Storage Account
++++++ Last updated : 11/01/2022++++
+# Prepare your data and upload to Storage Account
+
+Multivariate Anomaly Detection requires training to process your data, and an Azure Storage Account to store your data for further training and inference steps.
+
+## Data preparation
+
+First you need to prepare your data for training and inference.
+
+### Input data schema
+
+Multivariate Anomaly Detection supports two types of data schemas: **OneTable** and **MultiTable**. You can use either of these schemas to prepare your data and upload it to your Storage Account for further training and inference.
++
+#### Schema 1: OneTable
+**OneTable** is one CSV file that contains all the variables that you want to use to train a Multivariate Anomaly Detection model, plus one `timestamp` column. Download the [One Table sample data](https://mvaddataset.blob.core.windows.net/public-sample-data/sample_data_5_3000.csv).
+* The `timestamp` values should conform to *ISO 8601*; the values of other variables in other columns could be *integers* or *decimals* with any number of decimal places.
+
+* Variables for training and variables for inference should be consistent. For example, if you're using `series_1`, `series_2`, `series_3`, `series_4`, and `series_5` for training, you should provide exactly the same variables for inference.
+
+ ***Example:***
+
+![Diagram of one table schema.](../media/prepare-data/onetable-schema.png)
+
+#### Schema 2: MultiTable
+
+**MultiTable** is multiple CSV files in one file folder, and each CSV file contains only two columns of one variable, with the exact column names of: **timestamp** and **value**. Download [Multiple Tables sample data](https://mvaddataset.blob.core.windows.net/public-sample-data/sample_data_5_3000.zip) and unzip it.
+
+* The `timestamp` values should conform to *ISO 8601*; the `value` could be *integers* or *decimals* with any number of decimal places.
+
+* The name of the csv file will be used as the variable name and should be unique. For example, *temperature.csv* and *humidity.csv*.
+
+* Variables for training and variables for inference should be consistent. For example, if you're using `series_1`, `series_2`, `series_3`, `series_4`, and `series_5` for training, you should provide exactly the same variables for inference.
+
+ ***Example:***
+
+> [!div class="mx-imgBorder"]
+> ![Diagram of multi table schema.](../media/prepare-data/multitable.png)
+
+> [!NOTE]
+> If your timestamps have hours, minutes, and/or seconds, ensure that they're properly rounded up before calling the APIs.
+> For example, if your data frequency is supposed to be one data point every 30 seconds, but you're seeing timestamps like "12:00:01" and "12:00:28", it's a strong signal that you should pre-process the timestamps to new values like "12:00:00" and "12:00:30".
+> For details, please refer to the ["Timestamp round-up" section](../concepts/best-practices-multivariate.md#timestamp-round-up) in the best practices document.
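If you need to pre-process timestamps as described in the note above, a small rounding helper is usually enough. The sketch below assumes a fixed 30-second data frequency; adjust the granularity to match your own series.

```csharp
// Snap a timestamp to the nearest multiple of the expected data frequency.
using System;

static DateTimeOffset RoundToGranularity(DateTimeOffset timestamp, TimeSpan granularity)
{
    long ticks = (long)Math.Round((double)timestamp.UtcTicks / granularity.Ticks) * granularity.Ticks;
    return new DateTimeOffset(ticks, TimeSpan.Zero);
}

// "12:00:01" becomes "12:00:00" and "12:00:28" becomes "12:00:30".
var rounded = RoundToGranularity(DateTimeOffset.Parse("2022-11-01T12:00:28Z"), TimeSpan.FromSeconds(30));
Console.WriteLine(rounded.ToString("yyyy-MM-ddTHH:mm:ssZ")); // 2022-11-01T12:00:30Z
```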
+
+## Upload your data to Storage Account
+
+Once you prepare your data with either of the two schemas above, you can upload your CSV file (OneTable) or your data folder (MultiTable) to your Storage Account.
+
+1. [Create a Storage Account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) and fill out the fields, which are similar to the steps for creating an Anomaly Detector resource.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of Azure Storage account setup page.](../media/prepare-data/create-blob.png)
+
+2. Select **Container** to the left in your Storage Account resource and select **+Container** to create one that will store your data.
+
+3. Upload your data to the container.
+
+ **Upload *OneTable* data**
+
+ Go to the container that you created, select **Upload**, then choose your prepared CSV file and upload it.
+
+ Once your data is uploaded, select your CSV file and copy the **blob URL** by using the small copy button. (Paste the URL somewhere convenient for later steps.)
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of copy blob url for one table.](../media/prepare-data/onetable-copy-url.png)
+
+ **Upload *MultiTable* data**
+
+ Go to the container that you created and select **Upload**. Then select **Advanced**, enter a folder name in **Upload to folder**, and select all of the separate CSV files (one per variable) to upload.
+
+ Once your data is uploaded, go into the folder, select one CSV file in the folder, and copy the **blob URL**. Keep only the part before the name of this CSV file, so that the final blob URL ***links to the folder***. (Paste the URL somewhere convenient for later steps.)
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of copy blob url for multi table.](../media/prepare-data/multitable-copy-url.png)
+
+4. Grant Anomaly Detector access to read the data in your Storage Account.
+ * In your container, select **Access Control (IAM)** on the left, then select **+ Add** > **Add role assignment**. If **Add role assignment** is disabled, contact your Storage Account owner to add the Owner role to your container.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of set access control UI.](../media/prepare-data/add-role-assignment.png)
+
+ * Search for the **Storage Blob Data Reader** role, select it, and then select **Next**. Technically, the roles highlighted below and the *Owner* role should all work.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of add role assignment with reader roles selected.](../media/prepare-data/add-reader-role.png)
+
+ * Select assign access to **Managed identity**, and **Select Members**, then choose the anomaly detector resource that you created earlier, then select **Review + assign**.
+
+## Next steps
+
+* [Train a multivariate anomaly detection model](train-model.md)
+* [Best practices of multivariate anomaly detection](../concepts/best-practices-multivariate.md)
cognitive-services Embedded Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/embedded-speech.md
+
+ Title: Embedded Speech - Speech service
+
+description: Embedded Speech is designed for on-device scenarios where cloud connectivity is intermittent or unavailable.
++++++ Last updated : 10/31/2022+
+zone_pivot_groups: programming-languages-set-thirteen
++
+# Embedded Speech (preview)
+
+Embedded Speech is designed for on-device [speech-to-text](speech-to-text.md) and [text-to-speech](text-to-speech.md) scenarios where cloud connectivity is intermittent or unavailable. For example, you can use embedded speech in medical equipment, a voice enabled air conditioning unit, or a car that might travel out of range. You can also develop hybrid cloud and offline solutions. For scenarios where your devices must be in a secure environment like a bank or government entity, you should first consider [disconnected containers](/azure/cognitive-services/containers/disconnected-containers).
+
+> [!IMPORTANT]
+> Microsoft limits access to embedded speech. You can apply for access through the Azure Cognitive Services [embedded speech limited access review](https://aka.ms/csgate-embedded-speech). For more information, see [Limited access for embedded speech](/legal/cognitive-services/speech-service/embedded-speech/limited-access-embedded-speech?context=/azure/cognitive-services/speech-service/context/context).
+
+## Platform requirements
+
+Embedded speech is included with the Speech SDK (version 1.24.1 and higher) for C#, C++, and Java. Refer to the general [Speech SDK installation requirements](quickstarts/setup-platform.md) for programming language and target platform specific details.
+
+**Choose your target environment**
+
+# [Android](#tab/android)
+
+Requires Android 7.0 (API level 24) or higher on ARM64 (`arm64-v8a`) or ARM32 (`armeabi-v7a`) hardware.
+
+Embedded TTS with neural voices is only supported on ARM64.
+
+# [Linux](#tab/linux)
+
+Requires Linux on x64, ARM64, or ARM32 hardware with [supported Linux distributions](quickstarts/setup-platform.md?tabs=linux).
+
+Embedded speech isn't supported on RHEL/CentOS 7.
+
+Embedded TTS with neural voices isn't supported on ARM32.
+
+# [macOS](#tab/macos)
+
+Requires macOS 10.14 or newer on x64 or ARM64 hardware.
+
+# [Windows](#tab/windows)
+
+Requires Windows 10 or newer on x64 or ARM64 hardware.
+
+The latest [Microsoft Visual C++ Redistributable for Visual Studio 2015-2022](/cpp/windows/latest-supported-vc-redist?view=msvc-170&preserve-view=true) must be installed regardless of the programming language used with the Speech SDK.
+
+The Speech SDK for Java doesn't support Windows on ARM64.
+++
+## Limitations
+
+Embedded speech is only available with C#, C++, and Java SDKs. The other Speech SDKs, Speech CLI, and REST APIs don't support embedded speech.
+
+Embedded speech recognition only supports mono 16 bit, 16-kHz PCM-encoded WAV audio.
+
+Embedded neural voices only support 24-kHz sample rate.
+
+## Models and voices
+
+For embedded speech, you'll need to download the speech recognition models for [speech-to-text](speech-to-text.md) and voices for [text-to-speech](text-to-speech.md). Instructions will be provided upon successful completion of the [limited access review](https://aka.ms/csgate-embedded-speech) process.
+
+## Embedded speech configuration
+
+For cloud connected applications, as shown in most Speech SDK samples, you use the `SpeechConfig` object with a Speech resource key and region. For embedded speech, you don't use a Speech resource. Instead of a cloud resource, you use the [models and voices](#models-and-voices) that you downloaded to your local device.
+
+Use the `EmbeddedSpeechConfig` object to set the location of the models or voices. If your application is used for both speech-to-text and text-to-speech, you can use the same `EmbeddedSpeechConfig` object to set the location of the models and voices.
++
+```csharp
+// Provide the location of the models and voices.
+List<string> paths = new List<string>();
+paths.Add("C:\\dev\\embedded-speech\\stt-models");
+paths.Add("C:\\dev\\embedded-speech\\tts-voices");
+var embeddedSpeechConfig = EmbeddedSpeechConfig.FromPaths(paths.ToArray());
+
+// For speech-to-text
+embeddedSpeechConfig.SetSpeechRecognitionModel(
+ "Microsoft Speech Recognizer en-US FP Model V8",
+ Environment.GetEnvironmentVariable("MODEL_KEY"));
+
+// For text-to-speech
+embeddedSpeechConfig.SetSpeechSynthesisVoice(
+ "Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)",
+ Environment.GetEnvironmentVariable("VOICE_KEY"));
+embeddedSpeechConfig.SetSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat.Riff24Khz16BitMonoPcm);
+```
+
+You can find ready to use embedded speech samples at [GitHub](https://aka.ms/csspeech/samples).
+
+- [C# (.NET 6.0)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/csharp/dotnetcore/embedded-speech)
+- [C# for Unity](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/csharp/unity/embedded-speech)
++
+> [!TIP]
+> The `GetEnvironmentVariable` function is defined in the [speech-to-text quickstart](get-started-speech-to-text.md) and [text-to-speech quickstart](get-started-text-to-speech.md).
+
+```cpp
+// Provide the location of the models and voices.
+vector<string> paths;
+paths.push_back("C:\\dev\\embedded-speech\\stt-models");
+paths.push_back("C:\\dev\\embedded-speech\\tts-voices");
+auto embeddedSpeechConfig = EmbeddedSpeechConfig::FromPaths(paths);
+
+// For speech-to-text
+embeddedSpeechConfig->SetSpeechRecognitionModel(
+ "Microsoft Speech Recognizer en-US FP Model V8",
+ GetEnvironmentVariable("MODEL_KEY"));
+
+// For text-to-speech
+embeddedSpeechConfig->SetSpeechSynthesisVoice(
+ "Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)",
+ GetEnvironmentVariable("VOICE_KEY"));
+embeddedSpeechConfig->SetSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat::Riff24Khz16BitMonoPcm);
+```
+
+You can find ready to use embedded speech samples at [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/cpp/embedded-speech)
++
+```java
+// Provide the location of the models and voices.
+List<String> paths = new ArrayList<>();
+paths.add("C:\\dev\\embedded-speech\\stt-models");
+paths.add("C:\\dev\\embedded-speech\\tts-voices");
+var embeddedSpeechConfig = EmbeddedSpeechConfig.fromPaths(paths);
+
+// For speech-to-text
+embeddedSpeechConfig.setSpeechRecognitionModel(
+ "Microsoft Speech Recognizer en-US FP Model V8",
+ System.getenv("MODEL_KEY"));
+
+// For text-to-speech
+embeddedSpeechConfig.setSpeechSynthesisVoice(
+ "Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)",
+ System.getenv("VOICE_KEY"));
+embeddedSpeechConfig.setSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat.Riff24Khz16BitMonoPcm);
+```
+
+You can find ready to use embedded speech samples at [GitHub](https://aka.ms/csspeech/samples).
+- [Java (JRE)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/java/jre/embedded-speech)
+- [Java for Android](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/java/android/embedded-speech)
++
+## Hybrid speech
+
+Hybrid speech with the `HybridSpeechConfig` object uses the cloud speech service by default and embedded speech as a fallback in case cloud connectivity is limited or slow.
+
+With hybrid speech configuration for [speech-to-text](speech-to-text.md) (recognition models), embedded speech is used when connection to the cloud service fails after repeated attempts. Recognition may continue using the cloud service again if the connection is later resumed.
+
+With hybrid speech configuration for [text-to-speech](text-to-speech.md) (voices), embedded and cloud synthesis are run in parallel and the result is selected based on which one gives a faster response. The best result is evaluated on each synthesis request.
+
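+As an illustration, here's a minimal C# sketch of a hybrid configuration. It assumes the `HybridSpeechConfig.FromConfigs` factory combines a cloud `SpeechConfig` with an `EmbeddedSpeechConfig` like the one shown earlier; the subscription key, region, model name, and model key are placeholders you'd replace with your own values.
+
+```csharp
+// Cloud configuration: used by default when connectivity is available.
+var cloudSpeechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+// Embedded configuration: used as the on-device fallback.
+var embeddedSpeechConfig = EmbeddedSpeechConfig.FromPath("C:\\dev\\embedded-speech\\stt-models");
+embeddedSpeechConfig.SetSpeechRecognitionModel(
+    "Microsoft Speech Recognizer en-US FP Model V8",
+    Environment.GetEnvironmentVariable("MODEL_KEY"));
+
+// Combine both into a single hybrid configuration.
+var hybridSpeechConfig = HybridSpeechConfig.FromConfigs(cloudSpeechConfig, embeddedSpeechConfig);
+```
+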
+## Cloud speech
+
+For cloud speech, you use the `SpeechConfig` object, as shown in the [speech-to-text quickstart](get-started-speech-to-text.md) and [text-to-speech quickstart](get-started-text-to-speech.md). To run the quickstarts for embedded speech, you can replace `SpeechConfig` with `EmbeddedSpeechConfig` or `HybridSpeechConfig`. Most of the other speech recognition and synthesis code is the same, whether you use cloud, embedded, or hybrid configuration.
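+
+For example, this hedged C# sketch shows that the recognition call itself doesn't change; only the configuration object differs. It assumes `SpeechRecognizer` offers constructor overloads that accept `EmbeddedSpeechConfig` (or `HybridSpeechConfig`) in the same way the cloud quickstart passes `SpeechConfig`, and it reuses the `embeddedSpeechConfig` object from the earlier snippet.
+
+```csharp
+// Requires the Microsoft.CognitiveServices.Speech and
+// Microsoft.CognitiveServices.Speech.Audio namespaces.
+using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
+using var recognizer = new SpeechRecognizer(embeddedSpeechConfig, audioConfig);
+
+// The recognition code is the same as in the cloud-based quickstart.
+var result = await recognizer.RecognizeOnceAsync();
+Console.WriteLine($"RECOGNIZED: {result.Text}");
+```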
+
+## Next steps
+
+- [Quickstart: Recognize and convert speech to text](get-started-speech-to-text.md)
+- [Quickstart: Convert text to speech](get-started-text-to-speech.md)
cognitive-services How To Manage Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-manage-settings.md
To access the settings page:
1. Sign in to the [Custom Translator](https://portal.customtranslator.azure.ai/) portal. 2. On Custom Translator portal, select the gear icon in the sidebar.-- ![Setting Link](media/how-to/how-to-settings.png) ## Associating Translator Subscription
cognitive-services How To Search Edit Delete Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-search-edit-delete-projects.md
Custom Translator gives you the ability to edit the name and description of a pr
## Delete a project You can delete a project when you no longer need it. Make sure the project doesn't have models in an active state such as deployed, training submitted, data processing, or deploying, otherwise, the delete operation will fail. The following steps describe how to delete a project.-- 1. Hover on any project record and select on the **trash bin** icon. ![Delete project](media/how-to/how-to-delete-project.png)
cognitive-services How To Upload Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-upload-document.md
Before uploading your documents, review the [document formats and naming convent
From [Custom Translator](https://portal.customtranslator.azure.ai) portal, Select the **Documents** tab to go to documents page. ![Document upload link](media/how-to/how-to-upload-1.png)-- 1. Select the **Upload files** button on the documents page. ![Upload document page](media/how-to/how-to-upload-2.png)
From [Custom Translator](https://portal.customtranslator.azure.ai) portal, Selec
**Upload history** tab. ![Upload document history dialog](media/how-to/how-to-upload-document-history.png)-- ## View upload history In upload history page you can view history of all document uploads details like document type, language pair, upload status etc.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/overview.md
Title: What is Custom Translator?
-description: Custom Translator offers similar capabilities to what Microsoft Translator Hub does for Statistical Machine Translation (SMT), but exclusively for Neural Machine Translation (NMT) systems.
+description: Custom Translator offers similar capabilities to what Microsoft Translator Hub does for Statistical Machine Translation (SMT), but exclusively for Neural Machine Translation (NMT) systems.
Last updated 02/25/2022
-#Customer intent: As a custom translator user, I want to understand what is Custom Translator, so that I can start using it.
# What is Custom Translator? Custom Translator is a feature of the Microsoft Translator service, which enables Translator enterprises, app developers, and language service providers to build customized neural machine translation (NMT) systems. The customized translation systems seamlessly integrate into existing applications, workflows, and websites.
-Translation systems built with [Custom Translator](https://portal.customtranslator.azure.ai) are available through the same cloud-based, secure, high performance, highly scalable Microsoft Translator [Text API V3](../reference/v3-0-translate.md?tabs=curl), that powers billions of translations every day.
+Translation systems built with [Custom Translator](https://portal.customtranslator.azure.ai) are available through the same cloud-based, secure, high performance, highly scalable Microsoft Translator [Text API V3](../reference/v3-0-translate.md?tabs=curl) that powers billions of translations every day.
The platform enables users to build and publish custom translation systems to and from English. Custom Translator supports more than three dozen languages that map directly to the languages available for NMT. For a complete list, *see* [Translator language support](../language-support.md). This documentation contains the following article types:
-* [**Quickstarts**](./v2-preview/quickstart.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](./v2-preview/how-to/create-manage-workspace.md) contain instructions for using the feature in more specific or customized ways.
+* [**Quickstarts**](./v2.0/quickstart.md) are getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](./v2.0/how-to/create-manage-workspace.md) contain instructions for using the feature in more specific or customized ways.
## Features
Custom Translator provides different features to build custom translation system
|Feature |Description | ||| |[Apply neural machine translation technology](https://www.microsoft.com/translator/blog/2016/11/15/microsoft-translator-launching-neural-network-based-translations-for-all-its-speech-languages/) | Improve your translation by applying neural machine translation (NMT) provided by Custom translator. |
-|[Build systems that knows your business terminology](./v2-preview/beginners-guide.md) | Customize and build translation systems using parallel documents, that understand the terminologies used in your own business and industry. |
-|[Use a dictionary to build your models](./v2-preview/how-to/train-custom-model.md#when-to-select-dictionary-only-training) | If you don't have training data set, you can train a model with only dictionary data. |
-|[Collaborate with others](./v2-preview/how-to/create-manage-workspace.md#manage-workspace-settings) | Collaborate with your team by sharing your work with different people. |
-|[Access your custom translation model](./v2-preview/how-to/translate-with-custom-model.md) | Your custom translation model can be accessed anytime by your existing applications/ programs via Microsoft Translator Text API V3. |
+|[Build systems that know your business terminology](./v2.0/beginners-guide.md) | Customize and build translation systems using parallel documents that understand the terminology used in your own business and industry. |
+|[Use a dictionary to build your models](./v2.0/how-to/train-custom-model.md#when-to-select-dictionary-only-training) | If you don't have a training data set, you can train a model with only dictionary data. |
+|[Collaborate with others](./v2.0/how-to/create-manage-workspace.md#manage-workspace-settings) | Collaborate with your team by sharing your work with different people. |
+|[Access your custom translation model](./v2.0/how-to/translate-with-custom-model.md) | Your custom translation model can be accessed anytime by your existing applications/ programs via Microsoft Translator Text API V3. |
## Get better translations
If the appropriate type and amount of training data is supplied, it's not uncomm
With [Custom Translator](https://portal.customtranslator.azure.ai), training and deploying a custom system doesn't require any programming skills.
-Using the secure [Custom Translator](https://portal.customtranslator.azure.ai) portal, users can upload training data, train systems, test systems, and deploy them to a production environment through an intuitive user interface. The system will then be available for use at scale within a few hours (actual time depends on training data size).
+The secure [Custom Translator](https://portal.customtranslator.azure.ai) portal enables users to upload training data, train systems, test systems, and deploy them to a production environment through an intuitive user interface. The system will then be available for use at scale within a few hours (actual time depends on training data size).
-[Custom Translator](https://portal.customtranslator.azure.ai) can also be programmatically accessed through a [dedicated API](https://custom-api.cognitive.microsofttranslator.com/swagger/) (currently in preview). The API allows users to manage creating or updating training through their own app or webservice.
+[Custom Translator](https://portal.customtranslator.azure.ai) can also be programmatically accessed through a [dedicated API](https://custom-api.cognitive.microsofttranslator.com/swagger/). The API allows users to manage creating or updating training through their own app or webservice.
The cost of using a custom model to translate content is based on the user's Translator Text API pricing tier. See the Cognitive Services [Translator Text API pricing webpage](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/) for pricing tier details. ## Securely translate anytime, anywhere on all your apps and services
-Custom systems can be seamlessly accessed and integrated into any product or business workflow, and on any device, via the Microsoft Translator Text API through standard REST technology.
+Custom systems can be seamlessly accessed and integrated into any product or business workflow and on any device via the Microsoft Translator Text REST API.
## Next steps -- Read about [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/).
+* Read about [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/).
-- With [Quickstart](./v2-preview/quickstart.md) learn to build a translation model in Custom Translator.
+* With [Quickstart](./v2.0/quickstart.md) learn to build a translation model in Custom Translator.
cognitive-services Quickstart Build Deploy Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/quickstart-build-deploy-custom-model.md
On the Custom Translator portal landing page, select **New Project**. On the dia
your project. For more details, visit [Create Project](how-to-create-project.md). ![Create project](media/how-to/how-to-create-project.png)-- ## Upload documents Next, upload [training](training-and-model.md#training-document-type-for-custom-translator), [tuning](training-and-model.md#tuning-document-type-for-custom-translator) and [testing](training-and-model.md#testing-dataset-for-custom-translator) document sets. You can upload both [parallel](what-are-parallel-documents.md) and combo documents. You can also upload [dictionary](what-is-dictionary.md).
cognitive-services Beginners Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/v2.0/beginners-guide.md
+
+ Title: Custom Translator for beginners
+
+description: A user guide for understanding the end-to-end customized machine translation process.
+++++ Last updated : 02/25/2022++
+# Custom Translator for beginners
+
+ [Custom Translator](../overview.md) enables you to build a translation system that reflects your business, industry, and domain-specific terminology and style. Training and deploying a custom system is easy and doesn't require any programming skills. The customized translation system seamlessly integrates into your existing applications, workflows, and websites and is available on Azure through the same cloud-based [Microsoft Text Translator API](../../reference/v3-0-translate.md?tabs=curl) service that powers billions of translations every day.
+
+The platform enables users to build and publish custom translation systems to and from English. The Custom Translator supports more than three dozen languages that map directly to the languages available for NMT. For a complete list, *see* [Translator language support](../../language-support.md).
+
+## Is a custom translation model the right choice for me?
+
+A well-trained custom translation model provides more accurate domain-specific translations because it relies on previously translated in-domain documents to learn preferred translations. Translator uses these terms and phrases in context to produce fluent translations in the target language while respecting context-dependent grammar.
+
+Training a full custom translation model requires a substantial amount of data. If you don't have at least 10,000 sentences of previously translated documents, you won't be able to train a full-language translation model. However, you can either train a dictionary-only model or use the high-quality, out-of-the-box translations available with the Text Translator API.
++
+## What does training a custom translation model involve?
+
+Building a custom translation model requires:
+
+* Understanding your use-case.
+
+* Obtaining in-domain translated data (preferably human translated).
+
+* The ability to assess translation quality or target language translations.
+
+## How do I evaluate my use-case?
+
+Having clarity on your use-case and what success looks like is the first step towards sourcing proficient training data. Here are a few considerations:
+
+* What is your desired outcome and how will you measure it?
+
+* What is your business domain?
+
+* Do you have in-domain sentences of similar terminology and style?
+
+* Does your use-case involve multiple domains? If yes, should you build one translation system or multiple systems?
+
+* Do you have requirements impacting regional data residency at-rest and in-transit?
+
+* Are the target users in one or multiple regions?
+
+## How should I source my data?
+
+Finding in-domain quality data is often a challenging task that varies based on user classification. Here are some questions you can ask yourself as you evaluate what data may be available to you:
+
+* Enterprises often have a wealth of translation data that has accumulated over many years of using human translation. Does your company have previous translation data available that you can use?
+
+* Do you have a vast amount of monolingual data? Monolingual data is data in only one language. If so, can you get translations for this data?
+
+* Can you crawl online portals to collect source sentences and synthesize target sentences?
+
+## What should I use for training material?
+
+| Source | What it does | Rules to follow |
+||||
+| Bilingual training documents | Teaches the system your terminology and style. | **Be liberal**. Any in-domain human translation is better than machine translation. Add and remove documents as you go and try to improve the [BLEU score](../what-is-bleu-score.md?WT.mc_id=aiml-43548-heboelma). |
+| Tuning documents | Trains the Neural Machine Translation parameters. | **Be strict**. Compose them to be optimally representative of what you are going to translate in the future. |
+| Test documents | Calculate the [BLEU score](../what-is-bleu-score.md?WT.mc_id=aiml-43548-heboelma).| **Be strict**. Compose test documents to be optimally representative of what you plan to translate in the future. |
+| Phrase dictionary | Forces the given translation 100% of the time. | **Be restrictive**. A phrase dictionary is case-sensitive and any word or phrase listed is translated in the way you specify. In many cases, it's better to not use a phrase dictionary and let the system learn. |
+| Sentence dictionary | Forces the given translation 100% of the time. | **Be strict**. A sentence dictionary is case-insensitive and good for common in-domain short sentences. For a sentence dictionary match to occur, the entire submitted sentence must match the source dictionary entry. If only a portion of the sentence matches, the entry won't match. |
+
+## What is a BLEU score?
+
+BLEU (Bilingual Evaluation Understudy) is an algorithm for evaluating the precision or accuracy of text that has been machine translated from one language to another. Custom Translator uses the BLEU metric as one way of conveying translation accuracy.
+
+A BLEU score is a number between zero and 100. A score of zero indicates a low quality translation where nothing in the translation matched the reference. A score of 100 indicates a perfect translation that is identical to the reference. It's not necessary to attain a score of 100 - a BLEU score between 40 and 60 indicates a high-quality translation.
+
+[Read more](../what-is-bleu-score.md?WT.mc_id=aiml-43548-heboelma)
+
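+For background, the widely used BLEU formulation (Papineni et al., 2002) combines modified n-gram precisions with a brevity penalty and is scaled to 0-100 in reports like the one Custom Translator shows. The formula below is included only as a reference sketch; this article doesn't specify the exact variant the service computes.
+
+$$
+\text{BLEU} = \mathrm{BP} \cdot \exp\left(\sum_{n=1}^{N} w_n \log p_n\right), \qquad
+\mathrm{BP} = \begin{cases} 1 & c > r \\ e^{1 - r/c} & c \le r \end{cases}
+$$
+
+Here $p_n$ is the modified n-gram precision, $w_n$ are the weights (typically uniform with $N = 4$), $c$ is the candidate translation length, and $r$ is the reference length.
+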
+## What happens if I don't submit tuning or testing data?
+
+Tuning and test sentences are optimally representative of what you plan to translate in the future. If you don't submit any tuning or testing data, Custom Translator will automatically exclude sentences from your training documents to use as tuning and test data.
+
+| System-generated | Manual-selection |
+|||
+| Convenient. | Enables fine-tuning for your future needs.|
+| Good, if you know that your training data is representative of what you are planning to translate. | Provides more freedom to compose your training data.|
+| Easy to redo when you grow or shrink the domain. | Allows for more data and better domain coverage.|
+|Changes each training run.| Remains static over repeated training runs|
+
+## How is training material processed by Custom Translator?
+
+To prepare for training, documents undergo a series of processing and filtering steps. These steps are explained below. Knowledge of the filtering process may help with understanding the sentence count displayed as well as the steps you can take to prepare training documents for training with Custom Translator.
+
+* ### Sentence alignment
+
+ If your document isn't in XLIFF, XLSX, TMX, or ALIGN format, Custom Translator aligns the sentences of your source and target documents to each other, sentence-by-sentence. Translator doesn't perform document alignment; it follows your naming convention for the documents to find a matching document in the other language. Within the source text, Custom Translator tries to find the corresponding sentence in the target language. It uses document markup like embedded HTML tags to help with the alignment.
+
+ If you see a large discrepancy between the number of sentences in the source and target documents, your source document may not be parallel, or couldn't be aligned. Document pairs with a large difference (>10%) in the number of sentences on each side warrant a second look to make sure they're indeed parallel.
+
+* ### Extracting tuning and testing data
+
+ Tuning and testing data is optional. If you don't provide it, the system will remove an appropriate percentage from your training documents to use for tuning and testing. The removal happens dynamically as part of the training process. Since this step occurs as part of training, your uploaded documents aren't affected. You can see the final used sentence counts for each category of data (training, tuning, testing, and dictionary) on the Model details page after training has succeeded.
+
+* ### Length filter
+
+ * Removes sentences with only one word on either side.
+ * Removes sentences with more than 100 words on either side. Chinese, Japanese, Korean are exempt.
+ * Removes sentences with fewer than three characters. Chinese, Japanese, Korean are exempt.
+ * Removes sentences with more than 2000 characters for Chinese, Japanese, Korean.
+ * Removes sentences with less than 1% alphanumeric characters.
+ * Removes dictionary entries containing more than 50 words.
+
+* ### White space
+
+ * Replaces any sequence of white-space characters including tabs and CR/LF sequences with a single space character.
+ * Removes leading or trailing space in the sentence.
+
+* ### Sentence end punctuation
+
+ * Replaces multiple sentence-end punctuation characters with a single instance.
+
+* ### Japanese character normalization
+
+ * Converts full width letters and digits to half-width characters.
+
+* ### Unescaped XML tags
+
+ Transforms unescaped tags into escaped tags:
+
+ | Tag | Becomes |
+ |||
+ | \&lt; | \&amp;lt; |
+ | \&gt; | \&amp;gt; |
+ | \&amp; | \&amp;amp; |
+
+* ### Invalid characters
+
+ Custom Translator removes sentences that contain Unicode character U+FFFD. The character U+FFFD indicates a failed encoding conversion.
+
+## What steps should I take before uploading data?
+
+* Remove sentences with invalid encoding.
+* Remove Unicode control characters.
+* If feasible, align sentences (source-to-target).
+* Remove source and target sentences that don't match the source and target languages.
+* When source and target sentences have mixed languages, ensure that untranslated words are intentional, for example, names of organizations and products.
+* Correct grammatical and typographical errors to prevent teaching these errors to your model.
+* Though our training process handles source and target lines containing multiple sentences, it's better to have one source sentence mapped to one target sentence.
+
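+As a rough illustration of a few of the cleanup steps listed above, the following C# sketch normalizes white space, blanks out lines containing the U+FFFD replacement character or other control characters, and keeps source and target line counts aligned. It's a minimal example under stated assumptions, not a complete preparation pipeline; the file names are hypothetical and the exact rules you apply are up to you.
+
+```csharp
+using System.IO;
+using System.Linq;
+using System.Text.RegularExpressions;
+
+// Hypothetical input file - adjust to your own data.
+var lines = File.ReadAllLines("train.en");
+
+var cleaned = lines
+    // Collapse tabs, CR/LF remnants, and repeated spaces into a single space, then trim.
+    .Select(line => Regex.Replace(line, @"\s+", " ").Trim())
+    // Blank out lines with a failed encoding conversion (U+FFFD) or control characters.
+    .Select(line => line.Any(ch => ch == '\uFFFD' || char.IsControl(ch)) ? string.Empty : line)
+    .ToArray();
+
+// Write the result; blank lines mark sentences to remove from BOTH the source and target
+// files so the documents stay aligned sentence-by-sentence.
+File.WriteAllLines("train.cleaned.en", cleaned);
+```
+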
+## How do I evaluate the results?
+
+After your model is successfully trained, you can view the model's BLEU score and baseline model BLEU score on the model details page. We use the same set of test data to generate both the model's BLEU score and the baseline BLEU score. This data will help you make an informed decision regarding which model would be better for your use-case.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Try our Quickstart](quickstart.md)
cognitive-services Create Manage Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/v2.0/how-to/create-manage-project.md
+
+ Title: Create and manage a project
+
+description: How to create and manage a project in the Azure Cognitive Services Custom Translator.
++++ Last updated : 01/20/2022++++
+# Create and manage a project
+
+A project contains translation models for one language pair. Each project includes all documents that were uploaded into that workspace with the correct language pair.
+
+Creating a project is the first step in building and publishing a model.
+
+## Create a project
+
+1. After you sign in, your default workspace is loaded. To create a project in different workspace, select **My workspaces**, then select a workspace name.
+
+1. Select **Create project**.
+
+1. Enter the following details about your project in the creation dialog:
+
+ - **Project name (required):** Give your project a unique, meaningful name. It's not necessary to mention the languages within the title.
+
+ - **Language pair (required):** Select the source and target languages from the dropdown list
+
+ - **Domain (required):** Select the domain from the dropdown list that's most appropriate for your project. The domain describes the terminology and style of the documents you intend to translate.
+
+ >[!Note]
+ >Select **Show advanced options** to add project label, project description, and domain description
+
+ - **Project label:** The project label distinguishes between projects with the same language pair and domain. As a best practice, here are a few tips:
+
+ - Use a label *only* if you're planning to build multiple projects for the same language pair and same domain and want to access these projects with a different Domain ID.
+
+ - Don't use a label if you're building systems for one domain only.
+
+ - A project label isn't required and isn't helpful for distinguishing between language pairs.
+
+ - You can use the same label for multiple projects.
+
+ - **Project description:** A short summary about the project. This description has no influence over the behavior of the Custom Translator or your resulting custom system, but can help you differentiate between different projects.
+
+ - **Domain description:** Use this field to better describe the particular field or industry in which you're working. For example, if your category is medicine, you might add details about your subfield, such as surgery or pediatrics. The description has no influence over the behavior of the Custom Translator or your resulting custom system.
+
+1. Select **Create project**.
+
+ :::image type="content" source="../media/how-to/create-project-dialog.png" alt-text="Screenshot illustrating the create project fields.":::
+
+## Edit a project
+
+To modify the project name, project description, or domain description:
+
+1. Select the workspace name.
+
+1. Select the project name, for example, *English-to-German*.
+
+1. The **Edit and Delete** buttons should now be visible.
+
+ :::image type="content" source="../media/how-to/edit-project-dialog-1.png" alt-text="Screenshot illustrating the edit project fields":::
+
+1. Select **Edit** and fill in or modify existing text.
+
+ :::image type="content" source="../media/how-to/edit-project-dialog-2.png" alt-text="Screenshot illustrating detailed edit project fields.":::
+
+1. Select **Edit project** to save.
+
+## Delete a project
+
+1. Follow the [**Edit a project**](#edit-a-project) steps 1-3 above.
+
+1. Select **Delete** and read the delete message before you select **Delete project** to confirm.
+
+ :::image type="content" source="../media/how-to/delete-project-1.png" alt-text="Screenshot illustrating delete project fields.":::
+
+ >[!Note]
+ >If your project has a published model or a model that is currently in training, you will only be able to delete your project once your model is no longer published or training.
+ >
+ > :::image type="content" source="../media/how-to/delete-project-2.png" alt-text="Screenshot illustrating the unable to delete message.":::
+
+## Next steps
+
+- Learn [how to manage project documents](create-manage-training-documents.md).
+- Learn [how to train a model](train-custom-model.md).
+- Learn [how to test and evaluate model quality](view-model-test-translation.md).
+- Learn [how to publish model](publish-model.md).
+- Learn [how to translate with custom models](translate-with-custom-model.md).
cognitive-services Create Manage Training Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/v2.0/how-to/create-manage-training-documents.md
+
+ Title: Build and upload training documents
+
+description: How to build and upload parallel documents (two documents where one is the origin and the other is the translation) using Custom Translator.
++++ Last updated : 01/20/2022++++
+# Build and manage training documents
+
+[Custom Translator](../../overview.md) enables you to build translation models that reflect your business, industry, and domain-specific terminology and style. Training and deploying a custom model is easy and doesn't require any programming skills. Custom Translator allows you to upload parallel files, translation memory files, or zip files.
+
+[Parallel documents](../../what-are-parallel-documents.md) are pairs of documents where one (target) is a translation of the other (source). One document in the pair contains sentences in the source language and the other document contains those sentences translated into the target language.
+
+Before uploading your documents, review the [document formats and naming convention guidance](../../document-formats-naming-convention.md) to make sure your file format is supported by Custom Translator.
+
+## How to create document sets
+
+Finding in-domain quality data is often a challenging task that varies based on user classification. Here are some questions you can ask yourself as you evaluate what data may be available to you:
+
+- Enterprises often have a wealth of translation data that has accumulated over many years of using human translation. Does your company have previous translation data available that you can use?
+
+- Do you have a vast amount of monolingual data? Monolingual data is data in only one language. If so, can you get translations for this data?
+
+- Can you crawl online portals to collect source sentences and synthesize target sentences?
+
+### Training material for each document type
+
+| Source | What it does | Rules to follow |
+||||
+| Bilingual training documents | Teaches the system your terminology and style. | **Be liberal**. Any in-domain human translation is better than machine translation. Add and remove documents as you go and try to improve the [BLEU score](../../what-is-bleu-score.md?WT.mc_id=aiml-43548-heboelma). |
+| Tuning documents | Trains the Neural Machine Translation parameters. | **Be strict**. Compose them to be optimally representative of what you are going to translate in the future. |
+| Test documents | Calculate the [BLEU score](../beginners-guide.md#what-is-a-bleu-score).| **Be strict**. Compose test documents to be optimally representative of what you plan to translate in the future. |
+| Phrase dictionary | Forces the given translation 100% of the time. | **Be restrictive**. A phrase dictionary is case-sensitive and any word or phrase listed is translated in the way you specify. In many cases, it's better to not use a phrase dictionary and let the system learn. |
+| Sentence dictionary | Forces the given translation 100% of the time. | **Be strict**. A sentence dictionary is case-insensitive and good for common in-domain short sentences. For a sentence dictionary match to occur, the entire submitted sentence must match the source dictionary entry. If only a portion of the sentence matches, the entry won't match. |
+
+## How to upload documents
+
+Document types are associated with the language pair selected when you create a project.
+
+1. Sign in to the [Custom Translator](https://portal.customtranslator.azure.ai) portal. Your default workspace is loaded and a list of previously created projects is displayed.
+
+1. Select the desired project **Name**. By default, the **Manage documents** blade is selected and a list of previously uploaded documents is displayed.
+
+1. Select **Add document set** and choose the document type:
+
+ - Training set
+ - Testing set
+ - Tuning set
+ - Dictionary set:
+ - Phrase Dictionary
+ - Sentence Dictionary
+
+1. Select **Next**.
+
+ :::image type="content" source="../media/how-to/upload-1.png" alt-text="Screenshot illustrating the document upload link.":::
+
+ >[!Note]
+ >Choosing **Dictionary set** launches **Choose type of dictionary** dialog.
+ >Choose one and select **Next**
+
+1. Select your documents format from the radio buttons.
+
+ :::image type="content" source="../media/how-to/upload-2.png" alt-text="Screenshot illustrating the upload document page.":::
+
+ - For **Parallel documents**, fill in the `Document set name` and select **Browse files** to select source and target documents.
+ - For **Translation memory (TM)** file or **Upload multiple sets with ZIP**, select **Browse files** to select the file
+
+1. Select **Upload**.
+
+At this point, Custom Translator is processing your documents and attempting to extract sentences as indicated in the upload notification. Once done processing, you'll see the upload successful notification.
+
+ :::image type="content" source="../media/quickstart/document-upload-notification.png" alt-text="Screenshot illustrating the upload document processing dialog window.":::
+
+## Next steps
+
+- Learn [how to train a model](train-custom-model.md).
+- Learn [how to test and evaluate model quality](view-model-test-translation.md).
+- Learn [how to publish model](publish-model.md).
+- Learn [how to translate with custom models](translate-with-custom-model.md).
cognitive-services Create Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/v2.0/how-to/create-manage-workspace.md
+
+ Title: Create and manage a workspace
+
+description: How to create and manage workspaces
++++ Last updated : 08/17/2022+++++
+# Create and manage a workspace
+
+ Workspaces are places to manage your documents, projects, and models. When you create a workspace, you can choose to use the workspace independently, or share it with teammates to divide up the work.
+
+## Create workspace
+
+1. After you sign in to Custom Translator, you'll be asked for permission to read your profile from the Microsoft identity platform to request your user access token and refresh token. Both tokens are needed for authentication and to ensure that you aren't signed out during your live session or while training your models. </br>Select **Yes**.
+
+ :::image type="content" source="../media/quickstart/first-time-user.png" alt-text="Screenshot illustrating first-time sign-in.":::
+
+1. Select **My workspaces**
+
+1. Select **Create a new workspace**
+
+1. Type a **Workspace name** and select **Next**
+
+1. Select "Global" for **Select resource region** from the dropdown list.
+
+1. Copy/paste your Translator Services key.
+
+1. Select **Next**.
+
+1. Select **Done**
+
+ > [!NOTE]
+ > Region must match the region that was selected during the resource creation. You can use **KEY 1** or **KEY 2**.
+
+ > [!NOTE]
+ > All uploaded customer content, custom model binaries, custom model configurations, and training logs are kept encrypted-at-rest in the selected region.
+
+ :::image type="content" source="../media/quickstart/resource-key.png" alt-text="Screenshot illustrating the resource key.":::
+
+ :::image type="content" source="../media/quickstart/create-workspace-1.png" alt-text="Screenshot illustrating workspace creation.":::
+
+## Manage workspace settings
+
+Select a workspace and navigate to **Workspace settings**. You can manage the following workspace settings:
+
+* Change the resource key for global regions. If you're using a regional specific resource, you can't change your resource key.
+
+* Change the workspace name.
+
+* [Share the workspace with others](#share-workspace-for-collaboration).
+
+* Delete the workspace.
+
+### Share workspace for collaboration
+
+The person who created the workspace is the owner. Within **Workspace settings**, an owner can designate three different roles for a collaborative workspace:
+
+* **Owner**. An owner has full permissions within the workspace.
+
+* **Editor**. An editor can add documents, train models, and delete documents and projects. They can't modify who the workspace is shared with, delete the workspace, or change the workspace name.
+
+* **Reader**. A reader can view (and download if available) all information in the workspace.
+
+> [!NOTE]
+> The Custom Translator workspace sharing policy has changed. For additional security measures, you can share a workspace only with people who have recently signed in to the Custom Translator portal.
+
+1. Select **Share**.
+
+1. Complete the **email address** field for collaborators.
+
+1. Select **role** from the dropdown list.
+
+1. Select **Share**.
+++
+### Remove somebody from a workspace
+
+1. Select **Share**.
+
+2. Select the **X** icon next to the **Role** and email address that you want to remove.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn how to manage projects](create-manage-project.md)
cognitive-services Publish Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/v2.0/how-to/publish-model.md
+
+ Title: Publish a custom model
+
+description: This article explains how to publish a custom model using the Azure Cognitive Services Custom Translator.
++++ Last updated : 01/20/2022+++
+# Publish a custom model
+
+Publishing your model makes it available for use with the Translator API. A project might have one or many successfully trained models. You can only publish one model per project; however, you can publish a model to one or multiple regions depending on your needs. For more information, see [Translator pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator/#pricing).
+
+## Publish your trained model
+
+You can publish one model per project to one or multiple regions.
+1. Select the **Publish model** blade.
+
+1. Select *en-de with sample data* and select **Publish**.
+
+1. Check the desired region(s).
+
+1. Select **Publish**. The status should transition from _Deploying_ to _Deployed_.
+
+ :::image type="content" source="../media/quickstart/publish-model.png" alt-text="Screenshot illustrating the publish model blade.":::
+
+## Replace a published model
+
+To replace a published model, you can exchange the published model with a different model in the same region(s):
+
+1. Select the replacement model.
+
+1. Select **Publish**.
+
+1. Select **publish** once more in the **Publish model** dialog window.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn how to translate documents with custom models](translate-with-custom-model.md)
cognitive-services Train Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/v2.0/how-to/train-custom-model.md
+
+ Title: Train model
+
+description: How to train a custom model
++++ Last updated : 01/20/2022+++
+# Train a custom model
+
+A model provides translations for a specific language pair. The outcome of a successful training is a model. To train a custom model, three mutually exclusive document types are required: training, tuning, and testing. If only training data is provided when queuing a training, Custom Translator will automatically assemble tuning and testing data. It will use a random subset of sentences from your training documents, and exclude these sentences from the training data itself. A minimum of 10,000 parallel training sentences are required to train a full model.
+
+## Create model
+
+1. Select the **Train model** blade.
+
+1. Type the **Model name**.
+
+1. Keep the default **Full training** selected or select **Dictionary-only training**.
+
+ >[!Note]
+ >Full training displays all uploaded document types. Dictionary-only displays dictionary documents only.
+
+1. Under **Select documents**, select the documents you want to use to train the model, for example, `sample-English-German` and review the training cost associated with the selected number of sentences.
+
+1. Select **Train now**.
+
+1. Select **Train** to confirm.
+
+ >[!Note]
+ >**Notifications** displays model training in progress, for example, the **Submitting data** state. Training a model takes a few hours, depending on the number of selected sentences.
+
+ :::image type="content" source="../media/quickstart/train-model.png" alt-text="Screenshot illustrating the train model blade.":::
+
+## When to select dictionary-only training
+
+For better results, we recommend letting the system learn from your training data. However, when you don't have enough parallel sentences to meet the 10,000-sentence minimum requirement, or when sentences and compound nouns must be rendered as-is, use dictionary-only training. Your model will typically complete training much faster than with full training. The resulting models will use the baseline models for translation along with the dictionaries you've added. You won't see BLEU scores or get a test report.
+
+> [!Note]
+>Custom Translator doesn't sentence-align dictionary files. Therefore, it is important that there are an equal number of source and target phrases/sentences in your dictionary documents and that they are precisely aligned. If not, the document upload will fail.
+
+## Model details
+
+1. After successful model training, select the **Model details** blade.
+
+1. Select the **Model Name** to review training date/time, total training time, number of sentences used for training, tuning, testing, dictionary, and whether the system generated the test and tuning sets. You'll use `Category ID` to make translation requests.
+
+1. Evaluate the model [BLEU score](../beginners-guide.md#what-is-a-bleu-score). Review the test set: the **BLEU score** is the custom model score and the **Baseline BLEU** is the pre-trained baseline model used for customization. A higher **BLEU score** means higher translation quality using the custom model.
+
+ :::image type="content" source="../media/quickstart/model-details.png" alt-text="Screenshot illustrating model details fields.":::
+
+## Duplicate model
+
+1. Select the **Model details** blade.
+
+1. Hover over the model name and check the selection button.
+
+1. Select **Duplicate**.
+
+1. Fill in **New model name**.
+
+1. Keep **Train immediately** checked if no further data will be selected or uploaded; otherwise, check **Save as draft**.
+
+1. Select **Save**
+
+ > [!Note]
+ >
+ > If you save the model as `Draft`, **Model details** is updated with the model name in `Draft` status.
+ >
+ > To add more documents, select the model name and follow the `Create model` section above.
+
+ :::image type="content" source="../media/how-to/duplicate-model.png" alt-text="Screenshot illustrating the duplicate model blade.":::
+
+## Next steps
+
+- Learn [how to test and evaluate model quality](view-model-test-translation.md).
+- Learn [how to publish model](publish-model.md).
+- Learn [how to translate with custom models](translate-with-custom-model.md).
cognitive-services Translate With Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/v2.0/how-to/translate-with-custom-model.md
+
+ Title: Translate text with a custom model
+
+description: How to make translation requests using custom models published with the Azure Cognitive Services Custom Translator.
++++ Last updated : 01/20/2022+++
+# Translate text with a custom model
+
+After you publish your custom model, you can access it with the Translator API by using the `Category ID` parameter.
+
+## How to translate
+
+1. Use the `Category ID` when making a custom translation request via Microsoft Translator [Text API V3](../../../reference/v3-0-translate.md?tabs=curl). The `Category ID` is created by concatenating the WorkspaceID, project label, and category code. Use the `CategoryID` with the Text Translator API to get custom translations.
+
+ ```http
+ https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=de&category=a2eb72f9-43a8-46bd-82fa-4693c8b64c3c-TECH
+
+ ```
+
+ More information about the Translator Text API can be found on the [Translator API Reference](../../../reference/v3-0-translate.md) page.
+
+1. You may also want to download and install our free [DocumentTranslator app for Windows](https://github.com/MicrosoftTranslator/DocumentTranslation/releases).
+
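+For example, here's a hedged C# sketch of a translation request that passes the `Category ID` through the `category` query parameter, based on the endpoint shown in step 1. The key, region, category value, and example text are placeholders you'd replace with your own values.
+
+```csharp
+using System.Text;
+using System.Text.Json;
+
+// Placeholders - replace with your own key, region, and Category ID.
+var key = "<your-translator-key>";
+var region = "<your-resource-region>";
+var categoryId = "a2eb72f9-43a8-46bd-82fa-4693c8b64c3c-TECH";
+
+var route = $"/translate?api-version=3.0&to=de&category={categoryId}";
+var body = JsonSerializer.Serialize(new object[] { new { Text = "Hello, what is your name?" } });
+
+using var client = new HttpClient();
+using var request = new HttpRequestMessage(HttpMethod.Post,
+    "https://api.cognitive.microsofttranslator.com" + route);
+request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+request.Headers.Add("Ocp-Apim-Subscription-Region", region);
+request.Content = new StringContent(body, Encoding.UTF8, "application/json");
+
+// Send the request and print the JSON translation result.
+var response = await client.SendAsync(request);
+Console.WriteLine(await response.Content.ReadAsStringAsync());
+```
+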
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about building and publishing custom models](../beginners-guide.md)
cognitive-services View Model Test Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/v2.0/how-to/view-model-test-translation.md
+
+ Title: View custom model details and test translation
+
+description: How to test custom model BLEU score and model translation
++++ Last updated : 01/20/2022+++
+# View custom model details and test translation
+
+Once your model has successfully trained, you can use translations to evaluate the quality of your model. In order to make an informed decision about whether to use our standard model or your custom model, you should evaluate the delta between your custom model [**BLEU score**](#bleu-score) and our standard model **Baseline BLEU**. If your models have been trained on a narrow domain, and your training data is consistent with the test data, you can expect a high BLEU score.
+
+## BLEU score
+
+BLEU (Bilingual Evaluation Understudy) is an algorithm for evaluating the precision or accuracy of text that has been machine translated from one language to another. Custom Translator uses the BLEU metric as one way of conveying translation accuracy.
+
+A BLEU score is a number between zero and 100. A score of zero indicates a low-quality translation where nothing in the translation matched the reference. A score of 100 indicates a perfect translation that is identical to the reference. It's not necessary to attain a score of 100 - a BLEU score between 40 and 60 indicates a high-quality translation.
+
+[Read more](../../what-is-bleu-score.md?WT.mc_id=aiml-43548-heboelma)
+
+## Model details
+
+1. Select the **Model details** blade.
+
+1. Select the model name. Review the training date/time, total training time, number of sentences used for training, tuning, testing, and dictionary. Check whether the system generated the test and tuning sets. You'll use the `Category ID` to make translation requests.
+
+1. Evaluate the model [BLEU](../beginners-guide.md#what-is-a-bleu-score) score. Review the test set: the **BLEU score** is the custom model score and the **Baseline BLEU** is the pre-trained baseline model used for customization. A higher **BLEU score** means higher translation quality using the custom model.
+
+ :::image type="content" source="../media/quickstart/model-details.png" alt-text="Screenshot illustrating the model detail.":::
+
+## Test quality of your model's translation
+
+1. Select **Test model** blade.
+
+1. Select model **Name**.
+
+1. Manually evaluate the translation from your **Custom model** and the **Baseline model** (our pre-trained baseline used for customization) against the **Reference** (the target translation from the test set).
+
+1. If you're satisfied with the training results, place a deployment request for the trained model.
+
+## Next steps
+
+- Learn [how to publish/deploy a custom model](publish-model.md).
+- Learn [how to translate documents with a custom model](translate-with-custom-model.md).
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/v2.0/quickstart.md
+
+ Title: "Quickstart: Build, deploy, and use a custom model - Custom Translator"
+
+description: A step-by-step guide to building a translation system using the Custom Translator portal v2.
++++ Last updated : 04/26/2022+++
+# Quickstart: Build, publish, and translate with custom models
+
+Translator is a cloud-based neural machine translation service that is part of the Azure Cognitive Services family of REST APIs and can be used with any operating system. Translator powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this quickstart, you'll learn to build custom solutions for your applications across all [supported languages](../../language-support.md).
+
+## Prerequisites
+
+ To use the [Custom Translator](https://portal.customtranslator.azure.ai/) portal, you'll need the following resources:
+
+* A [Microsoft account](https://signup.live.com).
+
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
+* Once you have an Azure subscription, [create a Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+ * You'll need the key and endpoint from the resource to connect your application to the Translator service. You'll paste your key and endpoint into the code below later in the quickstart. You can find these values on the Azure portal **Keys and Endpoint** page:
+
+ :::image type="content" source="../../media/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
+
+ See [how to create a Translator resource](../../how-to-create-translator-resource.md).
+
+Once you have the above prerequisites, sign in to the [Custom Translator](https://portal.customtranslator.azure.ai/) portal to create workspaces, build projects, upload files, train models, and publish your custom solution.
+
+You can read an overview of translation and custom translation, learn some tips, and watch a getting started video in the [Azure AI technical blog](https://techcommunity.microsoft.com/t5/azure-ai/customize-a-translation-to-make-sense-in-a-specific-context/ba-p/2811956).
+
+>[!Note]
+>Custom Translator does not support creating workspace for a Translator Text API resource created inside an [Enabled VNet](../../../../api-management/api-management-using-with-vnet.md?tabs=stv2).
+
+## Process summary
+
+1. [**Create a workspace**](#create-a-workspace). A workspace is a work area for composing and building your custom translation system. A workspace can contain multiple projects, models, and documents. All the work you do in Custom Translator is done inside a specific workspace.
+
+1. [**Create a project**](#create-a-project). A project is a wrapper for models, documents, and tests. Each project includes all documents that are uploaded into that workspace with the correct language pair. For example, if you have both an English-to-Spanish project and a Spanish-to-English project, the same documents will be included in both projects.
+
+1. [**Upload parallel documents**](#upload-documents). Parallel documents are pairs of documents where one (target) is the translation of the other (source). One document in the pair contains sentences in the source language and the other document contains sentences translated into the target language. It doesn't matter which language is marked as "source" and which language is marked as "target"; a parallel document can be used to train a translation system in either direction.
+
+1. [**Train your model**](#train-your-model). A model is the system that provides translation for a specific language pair. The outcome of a successful training is a model. When you train a model, three mutually exclusive document types are required: training, tuning, and testing. If only training data is provided when queuing a training, Custom Translator will automatically assemble tuning and testing data. It will use a random subset of sentences from your training documents, and exclude these sentences from the training data itself. A minimum of 10,000 parallel sentences is required to train a model.
+
+1. [**Test (human evaluate) your model**](#test-your-model). The testing set is used to compute the [BLEU](beginners-guide.md#what-is-a-bleu-score) score. This score indicates the quality of your translation system.
+
+1. [**Publish (deploy) your trained model**](#publish-your-model). Your custom model is made available for runtime translation requests.
+
+1. [**Translate text**](#translate-text). Use the cloud-based, secure, high performance, highly scalable Microsoft Translator [Text API V3](../../reference/v3-0-translate.md?tabs=curl) to make translation requests.
+
+## Create a workspace
+
+1. After you sign in to Custom Translator, you'll be asked for permission to read your profile from the Microsoft identity platform to request your user access token and refresh token. Both tokens are needed for authentication and to ensure that you aren't signed out during your live session or while training your models. </br>Select **Yes**.
+
+ :::image type="content" source="media/quickstart/first-time-user.png" alt-text="Screenshot illustrating how to create a workspace.":::
+
+1. Select **My workspaces**
+
+1. Select **Create a new workspace**
+
+1. Type _Contoso MT models_ for **Workspace name** and select **Next**
+
+1. Select "Global" for **Select resource region** from the dropdown list.
+
+1. Copy/paste your Translator Services key.
+
+1. Select **Next**.
+
+1. Select **Done**
+
+ >[!Note]
+ > Region must match the region that was selected during the resource creation. You can use **KEY 1** or **KEY 2.**
+
+ :::image type="content" source="media/quickstart/resource-key.png" alt-text="Screenshot illustrating the resource key.":::
+
+ :::image type="content" source="media/quickstart/create-workspace-1.png" alt-text="Screenshot illustrating workspace creation.":::
+
+## Create a project
+
+Once the workspace is created successfully, you'll be taken to the **Projects** page.
+
+You'll create an English-to-German project to train a custom model with only a [training](../training-and-model.md#training-document-type-for-custom-translator) document type.
+
+1. Select **Create project**.
+
+1. Type *English-to-German* for **Project name**.
+
+1. Select *English (en)* as **Source language** from the dropdown list.
+
+1. Select *German (de)* as **Target language** from the dropdown list.
+
+1. Select *General* for **Domain** from the dropdown list.
+
+1. Select **Create project**
+
+ :::image type="content" source="media/quickstart/create-project.png" alt-text="Screenshot illustrating how to create a project.":::
+
+## Upload documents
+
+In order to create a custom model, you need to upload all or a combination of [training](../training-and-model.md#training-document-type-for-custom-translator), [tuning](../training-and-model.md#tuning-document-type-for-custom-translator), [testing](../training-and-model.md#testing-dataset-for-custom-translator), and [dictionary](../what-is-dictionary.md) document types.
+
+In this quickstart, you'll upload [training](../training-and-model.md#training-document-type-for-custom-translator) documents for customization.
+
+>[!Note]
+> You can use our sample training, phrase and sentence dictionaries dataset, [Customer sample English-to-German datasets](https://github.com/MicrosoftTranslator/CustomTranslatorSampleDatasets), for this quickstart. However, for production, it's better to upload your own training dataset.
+
+1. Select *English-to-German* project name.
+
+1. Select **Manage documents** from the left navigation menu.
+
+1. Select **Add document set**.
+
+1. Check the **Training set** box and select **Next**.
+
+1. Keep **Parallel documents** checked and type *sample-English-German*.
+
+1. Under the **Source (English - EN) file**, select **Browse files** and select *sample-English-German-Training-en.txt*.
+
+1. Under the **Target (German - DE) file**, select **Browse files** and select *sample-English-German-Training-de.txt*.
+
+1. Select **Upload**
+
+ >[!Note]
+ >You can upload the sample phrase and sentence dictionaries dataset. This step is left for you to complete.
+
+ :::image type="content" source="media/quickstart/upload-model.png" alt-text="Screenshot illustrating how to upload documents.":::
+
+## Train your model
+
+Now you're ready to train your English-to-German model.
+
+1. Select **Train model** from the left navigation menu.
+
+1. Type *en-de with sample data* for **Model name**.
+
+1. Keep **Full training** checked.
+
+1. Under **Select documents**, check *sample-English-German* and review the training cost associated with the selected number of sentences.
+
+1. Select **Train now**.
+
+1. Select **Train** to confirm.
+
+ >[!Note]
+ >**Notifications** displays model training in progress, for example, the **Submitting data** state. Training a model takes a few hours, depending on the number of selected sentences.
+
+ :::image type="content" source="media/quickstart/train-model.png" alt-text="Screenshot illustrating how to create a model.":::
+
+1. After successful model training, select **Model details** from the left navigation menu.
+
+1. Select the model name *en-de with sample data*. Review training date/time, total training time, number of sentences used for training, tuning, testing, and dictionary. Check whether the system generated the test and tuning sets. You'll use the `Category ID` to make translation requests.
+
+1. Evaluate the model [BLEU](beginners-guide.md#what-is-a-bleu-score) score. The test set **BLEU score** is the custom model score and **Baseline BLEU** is the pre-trained baseline model used for customization. A higher **BLEU score** means higher translation quality using the custom model.
+
+ >[!Note]
+ >If you train with our shared customer sample datasets, BLEU score will be different than the image.
+
+ :::image type="content" source="media/quickstart/model-details.png" alt-text="Screenshot illustrating model details.":::
+
+## Test your model
+
+Once your training has completed successfully, inspect the test set translated sentences.
+
+1. Select **Test model** from the left navigation menu.
+2. Select *en-de with sample data*.
+3. Manually evaluate the translations from the **New model** (custom model) and the **Baseline model** (our pre-trained baseline used for customization) against the **Reference** (the target translation from the test set).
+
+## Publish your model
+
+Publishing your model makes it available for use with the Translator API. A project might have one or many successfully trained models. You can only publish one model per project; however, you can publish a model to one or multiple regions depending on your needs. For more information, see [Translator pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator/#pricing).
+
+1. Select **Publish model** from the left navigation menu.
+
+1. Select *en-de with sample data* and select **Publish**.
+
+1. Check the desired region(s).
+
+1. Select **Publish**. The status should transition from _Deploying_ to _Deployed_.
+
+ :::image type="content" source="media/quickstart/publish-model.png" alt-text="Screenshot illustrating how to deploy a trained model.":::
+
+## Translate text
+
+1. Developers should use the `Category ID` when making translation requests with Microsoft Translator [Text API V3](../../reference/v3-0-translate.md?tabs=curl). More information about the Translator Text API can be found on the [API Reference](../../reference/v3-0-reference.md) webpage. A minimal request sketch follows this list.
+
+1. Business users may want to download and install our free [DocumentTranslator app for Windows](https://github.com/MicrosoftTranslator/DocumentTranslator/releases/tag/V2.9.4).
+
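+The following Python sketch shows how a translation request might pass the custom `Category ID` to the Translator Text API v3. The key, region, and category values are placeholders you would replace with your own; this is an illustrative sketch, not the only way to call the API.
+
+```python
+import requests
+
+# Placeholder values - replace with your Translator resource key, region, and custom Category ID.
+TRANSLATOR_KEY = "<your-translator-key>"
+TRANSLATOR_REGION = "<your-resource-region>"
+CATEGORY_ID = "<your-category-id>"
+
+url = "https://api.cognitive.microsofttranslator.com/translate"
+params = {
+    "api-version": "3.0",
+    "from": "en",
+    "to": "de",
+    "category": CATEGORY_ID,  # routes the request to your custom model
+}
+headers = {
+    "Ocp-Apim-Subscription-Key": TRANSLATOR_KEY,
+    "Ocp-Apim-Subscription-Region": TRANSLATOR_REGION,
+    "Content-Type": "application/json",
+}
+body = [{"Text": "The quick brown fox jumps over the lazy dog."}]
+
+response = requests.post(url, params=params, headers=headers, json=body)
+print(response.json())
+```
+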
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn how to manage workspaces](how-to/create-manage-workspace.md)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/whats-new.md
Document Translation .NET and Python client-library SDKs are now generally avail
* The [Custom Translator portal (v2.0)](https://portal.customtranslator.azure.ai/) is now in public preview and includes significant changes that makes it easier to create your custom translation systems.
-* To learn more, see our Custom Translator [documentation](custom-translator/overview.md) and try our [quickstart](custom-translator/v2-preview/quickstart.md) for step-by-step instructions.
+* To learn more, see our Custom Translator [documentation](custom-translator/overview.md) and try our [quickstart](custom-translator/v2.0/quickstart.md) for step-by-step instructions.
## October 2021
cognitive-services Cognitive Services For Big Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/cognitive-services-for-big-data.md
Cognitive Services for big data can use resources from any [supported region](ht
|Service Name|Service Description| |:--|:|
-|[Anomaly Detector](../anomaly-detector/index.yml "Anomaly Detector") | The Anomaly Detector (Preview) service allows you to monitor and detect abnormalities in your time series data.|
+|[Anomaly Detector](../anomaly-detector/index.yml "Anomaly Detector") | The Anomaly Detector service allows you to monitor and detect abnormalities in your time series data.|
### Language
cognitive-services Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/samples-python.md
Previously updated : 08/16/2022 Last updated : 11/01/2022 ms.devlang: python
The samples in this article use these Cognitive
- Language service - get the sentiment (or mood) of a set of sentences. - Computer Vision - get the tags (one-word descriptions) associated with a set of images.-- Bing Image Search - search the web for images related to a natural language query. - Speech-to-text - transcribe audio files to extract text-based transcripts. - Anomaly Detector - detect anomalies within a time series data.
from mmlspark.cognitive import *
# A general Cognitive Services key for the Language service and Computer Vision (or use separate keys that belong to each service) service_key = "ADD_YOUR_SUBSCRIPION_KEY"
-# A Bing Search v7 subscription key
-bing_search_key = "ADD_YOUR_SUBSCRIPION_KEY"
# An Anomaly Dectector subscription key anomaly_key = "ADD_YOUR_SUBSCRIPION_KEY"
display(analysis.transform(df).select("image", "analysis_results.description.tag
| https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/house.jpg | ['outdoor' 'grass' 'house' 'building' 'old' 'home' 'front' 'small' 'church' 'stone' 'large' 'grazing' 'yard' 'green' 'sitting' 'leading' 'sheep' 'brick' 'bench' 'street' 'white' 'country' 'clock' 'sign' 'parked' 'field' 'standing' 'garden' 'water' 'red' 'horse' 'man' 'tall' 'fire' 'group']
-## Bing Image Search sample
-
-[Bing Image Search](../bing-image-search/overview.md) searches the web to retrieve images related to a user's natural language query. In this sample, we use a text query that looks for images with quotes. It returns a list of image URLs that contain photos related to our query.
-
-```python
-from pyspark.ml import PipelineModel
-
-# Number of images Bing will return per query
-imgsPerBatch = 10
-# A list of offsets, used to page into the search results
-offsets = [(i*imgsPerBatch,) for i in range(100)]
-# Since web content is our data, we create a dataframe with options on that data: offsets
-bingParameters = spark.createDataFrame(offsets, ["offset"])
-
-# Run the Bing Image Search service with our text query
-bingSearch = (BingImageSearch()
- .setSubscriptionKey(bing_search_key)
- .setOffsetCol("offset")
- .setQuery("Martin Luther King Jr. quotes")
- .setCount(imgsPerBatch)
- .setOutputCol("images"))
-
-# Transformer that extracts and flattens the richly structured output of Bing Image Search into a simple URL column
-getUrls = BingImageSearch.getUrlTransformer("images", "url")
-
-# This displays the full results returned, uncomment to use
-# display(bingSearch.transform(bingParameters))
-
-# Since we have two services, they are put into a pipeline
-pipeline = PipelineModel(stages=[bingSearch, getUrls])
-
-# Show the results of your search: image URLs
-display(pipeline.transform(bingParameters))
-```
-
-### Expected result
-
-| url |
-|:-|
-| https://iheartintelligence.com/wp-content/uploads/2019/01/powerful-quotes-martin-luther-king-jr.jpg |
-| http://everydaypowerblog.com/wp-content/uploads/2014/01/Martin-Luther-King-Jr.-Quotes-16.jpg |
-| http://www.sofreshandsogreen.com/wp-content/uploads/2012/01/martin-luther-king-jr-quote-sofreshandsogreendotcom.jpg |
-| https://everydaypowerblog.com/wp-content/uploads/2014/01/Martin-Luther-King-Jr.-Quotes-18.jpg |
-| https://tsal-eszuskq0bptlfh8awbb.stackpathdns.com/wp-content/uploads/2018/01/MartinLutherKingQuotes.jpg |
-- ## Speech-to-Text sample The [Speech-to-text](../speech-service/index-speech-to-text.yml) service converts streams or files of spoken audio to text. In this sample, we transcribe two audio files. The first file is easy to understand, and the second is more challenging.
cognitive-services Samples Scala https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/samples-scala.md
Previously updated : 10/28/2021 Last updated : 11/01/2022 ms.devlang: scala
The samples use these Cognitive
- Language service - get the sentiment (or mood) of a set of sentences. - Computer Vision - get the tags (one-word descriptions) associated with a set of images.-- Bing Image Search - search the web for images related to a natural language query. - Speech-to-text - transcribe audio files to extract text-based transcripts. - Anomaly Detector - detect anomalies within a time series data.
display(analysis.transform(df).select(col("image"), col("results").getItem("tags
| https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/dog.jpg | ['dog' 'outdoor' 'fence' 'wooden' 'small' 'brown' 'building' 'sitting' 'front' 'bench' 'standing' 'table' 'walking' 'board' 'beach' 'white' 'holding' 'bridge' 'track'] | https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/house.jpg | ['outdoor' 'grass' 'house' 'building' 'old' 'home' 'front' 'small' 'church' 'stone' 'large' 'grazing' 'yard' 'green' 'sitting' 'leading' 'sheep' 'brick' 'bench' 'street' 'white' 'country' 'clock' 'sign' 'parked' 'field' 'standing' 'garden' 'water' 'red' 'horse' 'man' 'tall' 'fire' 'group']
-## Bing Image Search
-
-[Bing Image Search](../bing-image-search/overview.md) searches the web to retrieve images related to a user's natural language query. In this sample, we use a text query that looks for images with quotes. It returns a list of image URLs that contain photos related to our query.
--
-```scala
-import org.apache.spark.ml.Pipeline
-
-// Number of images Bing will return per query
-val imgsPerBatch = 10
-
-// A list of offsets, used to page into the search results
-val df = (0 until 100).map(o => Tuple1(o*imgsPerBatch)).toSeq.toDF("offset")
-
-// Run the Bing Image Search service with our text query
-val bingSearch = new BingImageSearch()
- .setSubscriptionKey(bingSearchKey)
- .setOffsetCol("offset")
- .setQuery("Martin Luther King Jr. quotes")
- .setCount(imgsPerBatch)
- .setOutputCol("images")
-
-// Transformer that extracts and flattens the richly structured output of Bing Image Search into a simple URL column
-val getUrls = BingImageSearch.getUrlTransformer("images", "url")
-
-// This displays the full results returned, uncomment to use
-// display(bingSearch.transform(bingParameters))
-
-// Since we have two services, they are put into a pipeline
-val pipeline = new Pipeline().setStages(Array(bingSearch, getUrls))
-
-// Show the results of your search: image URLs
-display(pipeline.fit(df).transform(df))
-```
-
-### Expected result
-
-| url |
-|:-|
-| https://iheartintelligence.com/wp-content/uploads/2019/01/powerful-quotes-martin-luther-king-jr.jpg |
-| http://everydaypowerblog.com/wp-content/uploads/2014/01/Martin-Luther-King-Jr.-Quotes-16.jpg |
-| http://www.sofreshandsogreen.com/wp-content/uploads/2012/01/martin-luther-king-jr-quote-sofreshandsogreendotcom.jpg |
-| https://everydaypowerblog.com/wp-content/uploads/2014/01/Martin-Luther-King-Jr.-Quotes-18.jpg |
-| https://tsal-eszuskq0bptlfh8awbb.stackpathdns.com/wp-content/uploads/2018/01/MartinLutherKingQuotes.jpg |
- ## Speech-to-Text The [Speech-to-text](../speech-service/index-speech-to-text.yml) service converts streams or files of spoken audio to text. In this sample, we transcribe two audio files. The first file is easy to understand, and the second is more challenging.
cognitive-services Cognitive Services Limited Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-limited-access.md
Previously updated : 06/16/2022 Last updated : 10/27/2022
Limited Access services are made available to customers under the terms governin
The following services are Limited Access:
+- [Embedded Speech](/legal/cognitive-services/speech-service/embedded-speech/limited-access-embedded-speech?context=/azure/cognitive-services/speech-service/context/context): All features
- [Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context): Pro features - [Speaker Recognition](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context): All features - [Face API](/legal/cognitive-services/computer-vision/limited-access-identity?context=/azure/cognitive-services/computer-vision/context/context): Identify and Verify features, face ID property
Features of these services that aren't listed above are available without regist
Submit a registration form for each Limited Access service you would like to use:
+- [Embedded Speech](https://aka.ms/csgate-embedded-speech): All features
- [Custom Neural Voice](https://aka.ms/customneural): Pro features - [Speaker Recognition](https://aka.ms/azure-speaker-recognition): All features - [Face API](https://aka.ms/facerecognition): Identify and Verify features
Existing customers have until June 30, 2023 to submit a registration form and be
The registration forms can be found here:
+- [Embedded Speech](https://aka.ms/csgate-embedded-speech): All features
- [Custom Neural Voice](https://aka.ms/customneural): Pro features - [Speaker Recognition](https://aka.ms/azure-speaker-recognition): All features - [Face API](https://aka.ms/facerecognition): Identify and Verify features
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/overview.md
Previously updated : 08/10/2022 Last updated : 10/26/2022
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/quickstart.md
Previously updated : 06/29/2022 Last updated : 10/26/2022 zone_pivot_groups: usage-custom-language-features
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/quickstart.md
Previously updated : 10/21/2022 Last updated : 11/01/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Previously updated : 10/04/2022 Last updated : 11/01/2022
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
* [Python](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-language-conversations_1.0.0/sdk/cognitivelanguage/azure-ai-language-conversations) * v1.1.0b1 client library for [conversation summarization](summarization/quickstart.md?tabs=conversation-summarization&pivots=programming-language-python) is available as a preview for: * [Python](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-language-conversations_1.1.0b1/sdk/cognitivelanguage/azure-ai-language-conversations/samples/README.md)
-* There is a new endpoint URL and request format for making REST API calls to prebuilt Language service features. See the following quickstart guides and [reference documentation](/rest/api/language/) for information on structuring your API calls. All text analytics 3.2-preview.2 API users can begin migrating their workloads to this new endpoint.
+* There is a new endpoint URL and request format for making REST API calls to prebuilt Language service features. See the following quickstart guides and reference documentation for information on structuring your API calls. All text analytics 3.2-preview.2 API users can begin migrating their workloads to this new endpoint.
* [Entity linking](./entity-linking/quickstart.md?pivots=rest-api) * [Language detection](./language-detection/quickstart.md?pivots=rest-api) * [Key phrase extraction](./key-phrase-extraction/quickstart.md?pivots=rest-api)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-support.md
Azure Cognitive Services enable you to build applications that see, hear, speak
These Cognitive Services are language agnostic and don't have limitations based on human language.
-* [Anomaly Detector (Preview)](./anomaly-detector/index.yml)
+* [Anomaly Detector](./anomaly-detector/index.yml)
* [Custom Vision](./custom-vision-service/index.yml) * [Face](./computer-vision/index-identity.yml) * [Personalizer](./personalizer/index.yml)
communication-services Room Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/rooms/room-concept.md
Rooms are created and managed via rooms APIs or SDKs. Use the rooms API/SDKs in
Use the [Calling SDKs](../voice-video-calling/calling-sdk-features.md) to join the room call. Room calls can be joined using the Web, iOS or Android Calling SDKs. You can find quick start samples for joining room calls [here](../../quickstarts/rooms/join-rooms-call.md).
+Rooms can also be accessed using the [Azure Communication Services UI Library](https://azure.github.io/communication-ui-library/?path=/docs/rooms--page). The UI Library enables developers to add a Rooms-enabled call client to their application with only a couple of lines of code.
+ ## Control access to room calls Rooms can be set to operate in two levels of control over who is allowed to join a room call.
The tables below provide detailed capabilities mapped to the roles. At a high le
## Next steps: - Use the [QuickStart to create, manage and join a room](../../quickstarts/rooms/get-started-rooms.md). - Learn how to [join a room call](../../quickstarts/rooms/join-rooms-call.md).-- Review the [Network requirements for media and signaling](../voice-video-calling/network-requirements.md).
+- Review the [Network requirements for media and signaling](../voice-video-calling/network-requirements.md).
communication-services Media Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/media-streaming.md
+
+ Title: Media streaming overview
+
+description: Conceptual information about using Media Streaming APIs with Call Automation.
+++ Last updated : 10/25/2022++++
+# Media streaming overview - audio subscription
+
+> [!IMPORTANT]
+> Functionality described in this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly.
+> Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
+
+Azure Communication Services provides developers with media streaming capabilities to get real-time access to media streams, so they can capture, analyze, and process audio content during active calls. Consumption of live audio and video is prevalent today, whether for online meetings, online conferences, online schooling, or customer support, and it has only grown since Covid-19, with much of the world's workforce working remotely from home. With media streaming access, developers can now build server applications that capture and analyze audio streams for each participant on a call in real time. Developers can also combine media streaming with other call automation actions, or use their own AI models to analyze audio streams for use cases such as NLP for conversation analysis, or to provide real-time insights and suggestions to their agents while they're in an active interaction with their end users.
+
+This private preview gives developers access to real-time audio streams over a WebSocket connection, so they can analyze each participant's audio in mixed and unmixed formats.
+
+## Common use cases
+Audio streams can be used in many ways. Below are some examples of how developers may use audio streams in their applications.
+
+### Real-time call assistance
+
+**Improved AI-powered suggestions** - Use real-time audio streams of active interactions between agents and customers to gauge the intent of the call, and use your own AI model to analyze the call and offer active suggestions that help your agents provide a better experience to their customers.
+
+### Authentication
+**Biometric authentication** - Use the audio streams to carry out authentication using caller biometrics such as voice recognition.
+
+### Interpretations
+**Real-time translation** - Send audio streams to human or AI translators who can consume the audio content and provide translations.
+
+## Sample architecture for subscribing to audio streams from an ongoing call
+
+[![Diagram of the sample architecture for media streaming.](./media/media-streaming-flow.png)](./media/media-streaming-flow.png#lightbox)
+
+## Supported formats
+
+### Mixed format
+Contains mixed audio of all participants on the call.
+
+### Unmixed
+Contains audio per participant per channel, with support for up to four channels for four dominant speakers. You will also get a participantRawID that you can use to determine the speaker.
+
+## Additional information
+The following information will help developers convert the media packets into audible content that their applications can use.
+- Framerate: 50 frames per second
+- Packet stream rate: 20 ms rate
+- Data packet: 64 Kbytes
+- Audio metric: 16-bit PCM mono at 16000 Hz
+- Public string data is a base64 string that should be converted into a byte array to create a raw PCM file. You can then import the raw file into a tool such as Audacity, using the audio format above, to play it back. A decoding sketch follows this list.
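+
+The following Python sketch shows one way to decode the base64 payload and save it as a WAV file using the format described above (16-bit PCM, mono, 16 kHz). It's an illustrative sketch; the function name and file path are placeholders.
+
+```python
+import base64
+import wave
+
+def save_packet_as_wav(base64_data: str, path: str) -> None:
+    """Decode a base64 audio payload and write it as a playable WAV file."""
+    pcm_bytes = base64.b64decode(base64_data)
+    with wave.open(path, "wb") as wav_file:
+        wav_file.setnchannels(1)        # mono
+        wav_file.setsampwidth(2)        # 16-bit samples (2 bytes)
+        wav_file.setframerate(16000)    # 16 kHz sample rate
+        wav_file.writeframes(pcm_bytes)
+
+# Example usage with a (truncated) payload:
+# save_packet_as_wav("5ADwAOMA6AD0...", "packet.wav")
+```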
+
+## Next steps
+Check out the [Media Streaming quickstart](../../quickstarts/voice-video-calling/media-streaming.md) to learn more.
communication-services Manage Teams Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/manage-teams-identity.md
The service principal of the Contoso application in the Fabrikam tenant is creat
You can see that the status of the Communication Services Teams.ManageCalls and Teams.ManageChats permissions are *Granted for {Directory_name}*. +
+If you see the error "The app is trying to access a service '1fd5118e-2576-4263-8130-9503064c837a'(Azure Communication Services) that your organization '{GUID}' lacks a service principal for. Contact your IT Admin to review the configuration of your service subscriptions or consent to the application to create the required service principal.", your Azure Active Directory tenant lacks a service principal for the Azure Communication Services application. To fix this issue, use PowerShell as an Azure AD administrator to connect to your tenant. Replace `Tenant_ID` with the ID of your Azure AD tenant.
+
+```script
+Connect-AzureAD -TenantId "Tenant_ID"
+```
+If the command is not found, start PowerShell as an administrator and install the Azure AD package.
+
+```script
+Install-Module AzureAD
+```
+Then execute the following command to add a service principal to your tenant. Do not modify the GUID of the App ID.
+
+```script
+New-AzureADServicePrincipal -AppId "1fd5118e-2576-4263-8130-9503064c837a"
+```
++ ## Developer actions The Contoso developer needs to set up the *client application* to authenticate users. The developer then needs to create an endpoint on the back-end *server* to process the Azure AD user token after redirection. When the Azure AD user token is received, it's exchanged for the access token of Teams user and returned to the *client application*.
communication-services Media Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/media-streaming.md
+
+ Title: Media streaming quickstart
+
+description: Provides a quick start for developers to get audio streams through media streaming APIs from ACS calls.
+++ Last updated : 10/25/2022+++
+zone_pivot_groups: acs-csharp-java
++
+# Quickstart: Subscribing to audio streams from an ongoing call
+
+> [!IMPORTANT]
+> Functionality described in this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly.
+> Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
+
+Get started using audio streams through the Azure Communication Services Media Streaming API. This quickstart assumes you're already familiar with the Call Automation APIs used to build an automated call routing solution.
++++
+## Message schema
+When ACS has received the URL for your WebSocket server, it creates a connection to it. Once ACS has successfully connected to your WebSocket server, it sends the first data packet, which contains metadata about the incoming media packets.
+
+``` code
+/**
+ * The first message upon WebSocket connection will be the metadata packet
+ * which contains the subscriptionId and audio format
+ */
+public class AudioMetadataSample {
+ public string kind; // What kind of data this is, e.g. AudioMetadata, AudioData.
+ public AudioMetadata audioMetadata;
+}
+
+public class AudioMetadata {
+ public string subscriptionId; // unique identifier for a subscription request
+ public string encoding; // PCM only supported
+ public int sampleRate; // 16000 default
+ public int channels; // 1 default
+ public int length; // 640 default
+}
+```
+
+## Audio streaming schema
+After sending the metadata packet, ACS starts streaming audio media to your WebSocket server. Below is an example of the media object your server will receive.
+
+``` code
+/**
+ * The audio buffer object which is then serialized to JSON format
+ */
+public class AudioDataSample {
+ public string kind; // What kind of data this is, e.g. AudioMetadata, AudioData.
+ public AudioData audioData;
+}
+
+public class AudioData {
+ public string data; // Base64 Encoded audio buffer data
+ public string timestamp; // In ISO 8601 format (yyyy-mm-ddThh:mm:ssZ)
+ public string participantRawID;
+ public boolean silent; // Indicates if the received audio buffer contains only silence.
+}
+```
+
+Example of audio data being streamed
+
+``` json
+{
+ "kind": "AudioData",
+ "audioData": {
+ "timestamp": "2022-10-03T19:16:12.925Z",
+ "participantRawID": "8:acs:3d20e1de-0f28-41c5-84a0-4960fde5f411_0000000b-faeb-c708-99bf-a43a0d0036b0",
+ "data": "5ADwAOMA6AD0AOIA4ADkAN8AzwDUANEAywC+ALQArgC0AKYAnACJAIoAlACWAJ8ApwCiAKkAqgCqALUA0wDWANAA3QDVAN0A8wDzAPAA7wDkANkA1QDPAPIA6QDmAOcA0wDYAPMA8QD8AP0AAwH+AAAB/QAAAREBEQEDAQoB9wD3APsA7gDxAPMA7wDpAN0A6gD5APsAAgEHAQ4BEAETARsBMAFHAUABPgE2AS8BKAErATEBLwE7ASYBGQEAAQcBBQH5AAIBBwEMAQ4BAAH+APYA6gDzAPgA7gDkAOUA3wDcANQA2gDWAN8A3wDcAMcAxwDIAMsA1wDfAO4A3wDUANQA3wDvAOUA4QDpAOAA4ADhAOYA5wDkAOUA1gDxAOcA4wDpAOEA4gD0APoA7wD9APkA6ADwAPIA7ADrAPEA6ADfANQAzQDLANIAzwDaANcA3QDZAOQA4wDXANwA1ADbAOsA7ADyAPkA7wDiAOIA6gDtAOsA7gDeAOIA4ADeANUA6gD1APAA8ADgAOQA5wDgAPgA8ADnAN8A5gDgAOoA6wDcAOgA2gDZANUAyQDPANwA3gDgAO4A8QDyAAQBEwEDAewA+gDpAN4A6wDeAO8A8QDwAO8ABAEKAQUB/gD5AAMBAwEIARoBFAEeARkBDgH8AP0A+gD8APcA+gDrAO0A5wDcANEA0QDHAM4A0wDUAM4A0wDZANQAxgDSAM4A1ADVAOMA4QDhANUA2gDjAOYA5wDrANQA5wDrAMsAxQDWANsA5wDpAOEA4QDFAMoA0QDKAMgAwgDNAMsAwgCwAKkAtwCrAKoAsACgAJ4AlQCeAKAAoQCmAKwApwCsAK0AnQCVAA==",
+ "silent": false
+ }
+}
+```
+
+## Stop audio streaming
+Audio streaming will automatically stop when the call ends or is canceled.
+
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+- Learn more about [Media Streaming](../../concepts/voice-video-calling/media-streaming.md).
+- Learn more about [Call Automation](../../concepts/voice-video-calling/call-automation.md) and its features.
+- Learn more about [Play action](../../concepts/voice-video-calling/play-action.md).
+- Learn more about [Recognize action](../../concepts/voice-video-calling/recognize-action.md).
container-registry Container Registry Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-private-link.md
This article shows how to configure a private endpoint for your registry using t
[!INCLUDE [container-registry-scanning-limitation](../../includes/container-registry-scanning-limitation.md)] > [!NOTE]
-> Starting October 2021, new container registries allow a maximum of 200 private endpoints. Registries created earlier allow a maximum of 10 private endpoints. Use the [az acr show-usage](/cli/azure/acr#az-acr-show-usage) command to see the limit for your registry.
+> Starting October 2021, new container registries allow a maximum of 200 private endpoints. Registries created earlier allow a maximum of 10 private endpoints. Use the az acr show-usage command to see the limit for your registry. Please open a support ticket if this limit needs to be increased to 200 private endpoints.
## Prerequisites
container-registry Scan Images Defender https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/scan-images-defender.md
Last updated 10/11/2022
To scan images in your Azure container registries for vulnerabilities, you can integrate one of the available Azure Marketplace solutions or, if you want to use Microsoft Defender for Cloud, optionally enable **Microsoft Defender for container registries** at the subscription level.
-* Learn more about [Microsoft Defender for container registries](https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-va-acr)
-* Learn more about [container security in Microsoft Defender for Cloud](https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-introduction)
+* Learn more about [Microsoft Defender for container registries](https://learn.microsoft.com/azure/defender-for-cloud/defender-for-containers-va-acr)
+* Learn more about [container security in Microsoft Defender for Cloud](https://learn.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction)
## Registry operations by Microsoft Defender for Cloud
-Microsoft Defender for Cloud scans images that are pushed to a registry, imported into a registry, or any images pulled within the last 30 days. If vulnerabilities are detected, [recommended remediations](https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-va-acr#view-and-remediate-findings) appear in Microsoft Defender for Cloud.
+Microsoft Defender for Cloud scans images that are pushed to a registry, imported into a registry, or any images pulled within the last 30 days. If vulnerabilities are detected, [recommended remediations](https://learn.microsoft.com/azure/defender-for-cloud/defender-for-containers-va-acr#view-and-remediate-findings) appear in Microsoft Defender for Cloud.
After you've taken the recommended steps to remediate the security issue, replace the image in your registry. Microsoft Defender for Cloud rescans the image to confirm that the vulnerabilities are remediated.
-For details, see [Use Microsoft Defender for container registries](https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-va-acr).
+For details, see [Use Microsoft Defender for container registries](https://learn.microsoft.com/azure/defender-for-cloud/defender-for-containers-va-acr).
> [!TIP] > Microsoft Defender for Cloud authenticates with the registry to pull images for vulnerability scanning. If [resource logs](monitor-service-reference.md#resource-logs) are collected for your registry, you'll see registry login events and image pull events generated by Microsoft Defender for Cloud. These events are associated with an alphanumeric ID such as `b21cb118-5a59-4628-bab0-3c3f0e434cg6`.
cosmos-db How To Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-get-started.md
The most common constructor for **MongoClient** has two parameters:
| Parameter | Example value | Description | | | | |
-| ``url`` | ``COSMOS_CONNECTION_STRIN`` environment variable | API for MongoDB connection string to use for all requests |
+| ``url`` | ``COSMOS_CONNECTION_STRING`` environment variable | API for MongoDB connection string to use for all requests |
| ``options`` | `{ssl: true, tls: true, }` | [MongoDB Options](https://mongodb.github.io/node-mongodb-native/4.5/interfaces/MongoClientOptions.html) for the connection. | Refer to the [Troubleshooting guide](error-codes-solutions.md) for connection issues.
cosmos-db Howto Shard Count https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-shard-count.md
+
+ Title: Choose shard count - Azure Cosmos DB for PostgreSQL
+description: Pick the right shard count for distributed tables
+++++ Last updated : 11/01/2022++
+# Choose shard count
++
+Choosing the shard count for each distributed table is a balance between the
+flexibility of having more shards, and the overhead for query planning and
+execution across them. If you decide to change the shard count of a table after
+distributing, you can use the
+[alter_distributed_table](reference-functions.md#alter_distributed_table)
+function.
+
+## Multi-tenant SaaS use case
+
+The optimal choice varies depending on your access patterns for the data. For
+instance, in the Multi-Tenant SaaS Database use case, we recommend choosing
+between **32 and 128** shards. For smaller workloads, say under 100 GB, you could
+start with 32 shards; for larger workloads, you could choose 64 or 128. This
+choice gives you the leeway to scale from 32 to 128 worker machines.
+
+## Real-time analytics use case
+
+In the Real-Time Analytics use case, shard count should be related to the total
+number of cores on the workers. To ensure maximum parallelism, you should create
+enough shards on each node such that there is at least one shard per CPU core.
+We typically recommend creating a high number of initial shards, for example,
+**2x or 4x the number of current CPU cores**. Having more shards allows for
+future scaling if you add more workers and CPU cores.
+
+Keep in mind that, for each query, Azure Cosmos DB for PostgreSQL opens one
+database connection per shard, and that these connections are limited. Be
+careful to keep the shard count small enough that distributed queries won't
+often have to wait for a connection. Put another way, the connections needed,
+`(max concurrent queries * shard count)`, shouldn't exceed the total
+connections possible in the system, `(number of workers * max_connections per
+worker)`.
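+
+As a quick sanity check of that formula, the hypothetical numbers below illustrate the calculation; the worker count, connection limit, shard count, and concurrency values are assumptions, not recommendations.
+
+```python
+# Hypothetical example of the connection check described above.
+workers = 4
+max_connections_per_worker = 300      # assumed Postgres max_connections setting
+shard_count = 64
+max_concurrent_queries = 10
+
+connections_needed = max_concurrent_queries * shard_count        # 640
+connections_available = workers * max_connections_per_worker     # 1200
+
+print(connections_needed <= connections_available)               # True: within the limit
+```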
+
+## Next steps
+
+- Learn more about cluster [performance options](resources-compute.md).
+- [Scale a cluster](howto-scale-grow.md) up or out
+- [Rebalance shards](howto-scale-rebalance.md)
cosmos-db Quickstart Distribute Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-distribute-tables.md
Previously updated : 10/14/2022 Last updated : 11/01/2022 # Create and distribute tables
provides to distribute tables and use resources across multiple machines. The
function decomposes tables into shards, which can be spread across nodes for increased storage and compute performance.
+> [!NOTE]
+>
+> In real applications, when your workload fits in 64 vCores, 256GB RAM and 2TB
+> storage, you can use a single-node cluster. In this case, distributing tables
+> is optional. Later, you can distribute tables as needed using
+> [create_distributed_table_concurrently](reference-functions.md#create_distributed_table_concurrently).
+ Let's distribute the tables: ```sql
cosmos-db Reference Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-functions.md
Previously updated : 02/24/2022 Last updated : 11/01/2022 # Azure Cosmos DB for PostgreSQL functions
table shards will be moved together unnecessarily in a \"cascade.\"
If a new distributed table isn't related to other tables, it's best to specify `colocate_with => 'none'`.
+**shard\_count:** (Optional) the number of shards to create for the new
+distributed table. When specifying `shard_count`, you can't specify a value of
+`colocate_with` other than none. To change the shard count of an existing table
+or colocation group, use the [alter_distributed_table](#alter_distributed_table)
+function.
+
+Possible values for `shard_count` are between 1 and 64000. For guidance on
+choosing the optimal value, see [Shard Count](howto-shard-count.md).
+ #### Return Value N/A
SELECT create_distributed_table('github_events', 'repo_id',
colocate_with => 'github_repo'); ```
+### create\_distributed\_table\_concurrently
+
+This function has the same interface and purpose as
+[create_distributed_table](#create_distributed_table), but doesn't block
+writes during table distribution. A usage sketch follows the list of
+limitations below.
+
+However, `create_distributed_table_concurrently` has a few limitations:
+
+* You can't use the function in a transaction block, which means you can only
+ distribute one table at a time. (You *can* use the function on
+ time-partitioned tables, though.)
+* You can't use `create_distributed_table_concurrently` when the table is
+ referenced by a foreign key, or references another local table. However,
+ foreign keys to reference tables work, and you can create foreign keys to other
+ distributed tables after table distribution completes.
+* If you don't have a primary key or replica identity on your table, then
+ update and delete commands will fail during the table distribution due to
+ limitations on logical replication.
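+
+The sketch below shows how the function might be called from an application, using Python with the `psycopg2` driver; the connection string and table names are placeholders. Because the function can't run inside a transaction block, the connection uses autocommit.
+
+```python
+import psycopg2  # third-party package: pip install psycopg2-binary
+
+# Placeholder connection string for a cluster; replace with your own.
+conn = psycopg2.connect(
+    "host=<cluster>.postgres.cosmos.azure.com dbname=citus user=citus password=<password> sslmode=require"
+)
+conn.autocommit = True  # the function can't run inside a transaction block
+
+with conn.cursor() as cur:
+    # Same interface as create_distributed_table: table name and distribution column.
+    cur.execute("SELECT create_distributed_table_concurrently('github_events', 'repo_id');")
+
+conn.close()
+```
+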
+ ### truncate\_local\_data\_after\_distributing\_table Truncate all local rows after distributing a table, and prevent constraints
cost-management-billing Exchange And Refund Azure Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md
The following reservations aren't eligible for refunds:
- SUSE Linux plans > [!NOTE]
-> - **You must have owner access on the Reservation Order to exchange or refund an existing reservation**. You can [Add or change users who can manage a reservation](./manage-reserved-vm-instance.md#who-can-manage-a-reservation-by-default).
+> - **You must have owner or Reservation administrator access on the Reservation Order to exchange or refund an existing reservation**. You can [Add or change users who can manage a reservation](./manage-reserved-vm-instance.md#who-can-manage-a-reservation-by-default).
> - Microsoft is not currently charging early termination fees for reservation refunds. We might charge the fees for refunds made in the future. We currently don't have a date for enabling the fee. ## How to exchange or refund an existing reservation
cost-management-billing Manage Reserved Vm Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/manage-reserved-vm-instance.md
If all subscriptions are moved out of a management group, the scope of the reser
By default, the following users can view and manage reservations: - The person who bought the reservation and the account owner for the billing subscription get Azure RBAC access to the reservation order.-- Enterprise Agreement and Microsoft Customer Agreement billing contributors can manage all reservations from Cost Management + Billing > Reservation Transactions > select the blue banner.
+- Enterprise Agreement and Microsoft Customer Agreement billing contributors can manage all reservations from Cost Management + Billing > Reservation Transactions > select the blue banner.
+- A Reservation administrator for reservations in their Azure Active Directory (Azure AD) tenant (directory).
+- A Reservation reader has read-only access to reservations in their Azure Active Directory tenant (directory).
To allow other people to manage reservations, you have two options:
data-factory Concepts Annotations User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-annotations-user-properties.md
+
+ Title: Monitor Azure Data Factory and Azure Synapse Analytics pipelines with annotations and user properties
+description: Advanced monitoring with annotations and user properties
++++++ Last updated : 11/01/2022++
+# Monitor Azure Data Factory and Azure Synapse Analytics pipelines with annotations and user properties
++
+When monitoring your data pipelines, you may want to filter and monitor a certain group of activities, such as those belonging to a project or a specific department's pipelines. You may also need to monitor activities based on dynamic properties. You can achieve both by using annotations and user properties.
+
+## Annotations
+
+Azure Data Factory annotations are tags that you can add to your Azure Data Factory or Azure Synapse Analytics entities to easily identify them. An annotation allows you to classify or group different entities in order to easily monitor or filter them after an execution. Annotations only allow you to define static values and can be added to pipelines, datasets, linked services and triggers.
+
+## User properties
+
+User properties are key-value pairs defined at the activity level. By adding user properties, you can view additional information about activities in the activity runs window, which may help you monitor your activity executions.
+User properties allow you to define dynamic values and can be added to any activity, up to five per activity, under the User properties tab. A JSON sketch illustrating both concepts follows.
+
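+The sketch below shows where annotations and user properties might appear in a pipeline's JSON definition; the pipeline, activity, and property names are illustrative only, not taken from a real pipeline.
+
+```json
+{
+  "name": "ContosoSalesPipeline",
+  "properties": {
+    "annotations": [ "ProjectContoso", "Finance" ],
+    "activities": [
+      {
+        "name": "CopyDailySales",
+        "type": "Copy",
+        "userProperties": [
+          { "name": "SourceFile", "value": "@pipeline().parameters.fileName" },
+          { "name": "Destination", "value": "@pipeline().parameters.tableName" }
+        ],
+        "typeProperties": {}
+      }
+    ]
+  }
+}
+```
+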
+## Create and use annotations and user properties
+
+As discussed, annotations are static values that you can assign to pipelines, datasets, linked services, and triggers. Let's assume you want to filter for pipelines that belong to the same business unit or project. First, create the annotation: select the Properties icon, select the + New button, and name your annotation appropriately. We advise being consistent with your naming.
+
+![Screenshot showing how to create an annotation.](./media/concepts-annotations-user-properties/create-annotations.png "Create Annotation")
+
+When you go to the Monitor tab, you can filter Pipeline runs by this annotation:
+
+![Screenshot showing how to monitor annotations.](./media/concepts-annotations-user-properties/monitor-annotations.png "Monitor Annotations")
+
+If you want to monitor dynamic values at the activity level, you can do so by using user properties. You can add these to any activity by selecting the activity box, then the User properties tab, and then the + New button:
+
+![Screenshot showing how to create user properties.](./media/concepts-annotations-user-properties/create-user-properties.png "Create User Properties")
+
+For Copy Activity specifically, you can auto-generate these:
+
+![Screenshot showing User Properties under Copy activity.](./media/concepts-annotations-user-properties/copy-activity-user-properties.png "Copy Activity User Properties")
+
+To monitor User properties, go to the Activity runs monitoring view. Here you will see all the properties you added.
+
+![Screenshot showing how to use User Properties in the Monitor tab.](./media/concepts-annotations-user-properties/monitor-user-properties.png "Monitor User Properties")
+
+You can remove properties from the view by selecting the bookmark icon:
+
+![Screenshot showing how to remove User Properties.](./media/concepts-annotations-user-properties/remove-user-properties.png "Remove User Properties")
+
+## Next steps
+
+To learn more about monitoring see [Visually monitor Azure Data Factory.](./monitor-visually.md)
data-factory Connector Salesforce Marketing Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce-marketing-cloud.md
Previously updated : 08/02/2022 Last updated : 11/01/2022 # Copy data from Salesforce Marketing Cloud using Azure Data Factory or Synapse Analytics
For a list of data stores that are supported as sources/sinks, see the [Supporte
The Salesforce Marketing Cloud connector supports OAuth 2 authentication, and it supports both legacy and enhanced package types. The connector is built on top of the [Salesforce Marketing Cloud REST API](https://developer.salesforce.com/docs/atlas.en-us.mc-apis.meta/mc-apis/index-api.htm). >[!NOTE]
->This connector doesn't support retrieving custom objects or custom data extensions.
+>This connector doesn't support retrieving views, custom objects or custom data extensions.
## Getting started
data-factory Pricing Examples Copy Transform Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-copy-transform-azure-databricks.md
Last updated 09/22/2022
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-In this scenario, you want to copy data from AWS S3 to Azure Blob storage and transform the data with Azure Databricks on an hourly schedule for 30 days.
+In this scenario, you want to copy data from AWS S3 to Azure Blob storage and transform the data with Azure Databricks on an hourly schedule for 8 hours per day for 30 days.
The prices used in this example below are hypothetical and are not intended to imply exact actual pricing. Read/write and monitoring costs are not shown since they are typically negligible and will not impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
To accomplish the scenario, you need to create a pipeline with the following ite
| **Operations** | **Types and Units** | | | |
-| Run Pipeline | 3 Activity runs per execution (1 for trigger run, 2 for activity runs) |
-| Copy Data Assumption: execution time per run = 10 min | 10 \* 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
-| Execute Databricks activity Assumption: execution time per run = 10 min | 10 min External Pipeline Activity Execution |
+| Run Pipeline | 3 Activity runs **per execution** (1 for trigger run, 2 for activity runs) = 720 activity runs, rounded up since the calculator only allows increments of 1000. |
+| Copy Data Assumption: DIU hours **per execution** = 10 min | 10 min / 60 min \* 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
+| Execute Databricks activity Assumption: external execution hours **per execution** = 10 min | 10 min / 60 min External Pipeline Activity Execution |
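+
+As a rough check of the quantities in this table (not the final bill, which depends on current list prices), the arithmetic works out as follows, assuming 8 hourly executions per day for 30 days:
+
+```python
+# Hypothetical back-of-the-envelope check of the monthly quantities above.
+executions = 8 * 30                            # 240 pipeline executions per month
+activity_runs = executions * 3                 # 720 activity runs (1 trigger run + 2 activity runs each)
+copy_diu_hours = executions * (10 / 60) * 4    # 160 DIU-hours (10 min per run at 4 DIUs)
+databricks_hours = executions * (10 / 60)      # 40 external pipeline activity hours
+print(activity_runs, copy_diu_hours, databricks_hours)
+```
+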
## Pricing calculator example
-**Total scenario pricing for 30 days: $122.03**
+**Total scenario pricing for 30 days: $41.01**
:::image type="content" source="media/pricing-concepts/scenario-2-pricing-calculator.png" alt-text="Screenshot of the pricing calculator configured for a copy data and transform with Azure Databricks scenario." lightbox="media/pricing-concepts/scenario-2-pricing-calculator.png":::
data-factory Pricing Examples Copy Transform Dynamic Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-copy-transform-dynamic-parameters.md
Last updated 09/22/2022
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-In this scenario, you want to copy data from AWS S3 to Azure Blob storage and transform with Azure Databricks (with dynamic parameters in the script) on an hourly schedule.
+In this scenario, you want to copy data from AWS S3 to Azure Blob storage and transform with Azure Databricks (with dynamic parameters in the script) on an hourly schedule for 8 hours each day over 30 days.
The prices used in this example below are hypothetical and are not intended to imply exact actual pricing. Read/write and monitoring costs are not shown since they are typically negligible and will not impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
To accomplish the scenario, you need to create a pipeline with the following ite
- One copy activity with an input dataset for the data to be copied from AWS S3, an output dataset for the data on Azure storage. - One Lookup activity for passing parameters dynamically to the transformation script. - One Azure Databricks activity for the data transformation.-- One schedule trigger to execute the pipeline every hour. When you want to run a pipeline, you can either [trigger it immediately or schedule it](concepts-pipeline-execution-triggers.md). In addition to the pipeline itself, each trigger instance counts as a single Activity run.
+- One schedule trigger to execute the pipeline every hour for 8 hours per day. When you want to run a pipeline, you can either [trigger it immediately or schedule it](concepts-pipeline-execution-triggers.md). In addition to the pipeline itself, each trigger instance counts as a single Activity run.
:::image type="content" source="media/pricing-concepts/scenario3.png" alt-text="Diagram shows a pipeline with a schedule trigger. In the pipeline, copy activity flows to an input dataset, an output dataset, and lookup activity that flows to a DataBricks activity, which runs on Azure Databricks. The input dataset flows to an AWS S3 linked service. The output dataset flows to an Azure Storage linked service.":::
To accomplish the scenario, you need to create a pipeline with the following ite
| **Operations** | **Types and Units** | | | |
-| Run Pipeline | 4 Activity runs per execution (1 for trigger run, 3 for activity runs) |
-| Copy Data Assumption: execution time per run = 10 min | 10 \* 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
-| Execute Lookup activity Assumption: execution time per run = 1 min | 1 min Pipeline Activity execution |
-| Execute Databricks activity Assumption: execution time per run = 10 min | 10 min External Pipeline Activity execution |
+| Run Pipeline | 4 Activity runs **per execution** (1 for trigger run, 3 for activity runs) = 960 activity runs, rounded up since the calculator only allows increments of 1000. |
+| Copy Data Assumption: DIU hours **per execution** = 10 min | 10 min / 60 min \* 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
+| Execute Lookup activity Assumption: pipeline activity hours **per execution** = 1 min | 1 min / 60 min Pipeline Activity execution |
+| Execute Databricks activity Assumption: external execution hours **per execution** = 10 min | 10 min / 60 min External Pipeline Activity execution |
## Pricing example: Pricing calculator example
-**Total scenario pricing for 30 days: $122.09**
+**Total scenario pricing for 30 days: $41.03**
:::image type="content" source="media/pricing-concepts/scenario-3-pricing-calculator.png" alt-text="Screenshot of the pricing calculator configured for a copy data and transform with dynamic parameters scenario." lightbox="media/pricing-concepts/scenario-3-pricing-calculator.png":::
data-factory Pricing Examples Data Integration Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-data-integration-managed-vnet.md
Last updated 09/22/2022
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-In this scenario, you want to delete original files on Azure Blob Storage and copy data from Azure SQL Database to Azure Blob Storage on an hourly schedule. We'll calculate the price for 30 days. You'll do this execution twice on different pipelines for each run. The execution time of these two pipelines is overlapping.
+In this scenario, you want to delete original files on Azure Blob Storage and copy data from Azure SQL Database to Azure Blob Storage on an hourly schedule for 8 hours per day. We'll calculate the price for 30 days. You'll do this execution twice on different pipelines for each run. The execution time of these two pipelines is overlapping.
The prices used in this example below are hypothetical and aren't intended to imply exact actual pricing. Read/write and monitoring costs aren't shown since they're typically negligible and won't impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
To accomplish the scenario, you need to create two pipelines with the following
| **Operations** | **Types and Units** | | | |
-| Run Pipeline | 6 Activity runs per execution (2 for trigger run, 4 for activity runs) |
-| Execute Delete Activity: each execution time = 5 min. If the Delete Activity execution in first pipeline is from 10:00 AM UTC to 10:05 AM UTC and the Delete Activity execution in second pipeline is from 10:02 AM UTC to 10:07 AM UTC.|Total 7 min pipeline activity execution in Managed VNET. Pipeline activity supports up to 50 concurrency in Managed VNET. There's a 60 minutes Time To Live (TTL) for pipeline activity|
-| Copy Data Assumption: each execution time = 10 min if the Copy execution in first pipeline is from 10:06 AM UTC to 10:15 AM UTC and the Copy Activity execution in second pipeline is from 10:08 AM UTC to 10:17 AM UTC. | 10 * 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
+| Run Pipeline | 6 Activity runs **per execution** (2 for trigger runs, 4 for activity runs) = 1440, rounded up since the calculator only allows increments of 1000.|
+| Execute Delete Activity: pipeline execution time **per execution** = 7 min. For example, the Delete Activity execution in the first pipeline runs from 10:00 AM UTC to 10:05 AM UTC and the Delete Activity execution in the second pipeline runs from 10:02 AM UTC to 10:07 AM UTC. | Total 7 min / 60 min \* 240 monthly executions = 28 pipeline activity execution hours in Managed VNET. Pipeline activity supports up to 50 concurrent executions in Managed VNET. There's a 60-minute Time To Live (TTL) for pipeline activity. |
+| Copy Data Assumption: DIU execution time **per execution** = 10 min, if the Copy execution in the first pipeline runs from 10:06 AM UTC to 10:15 AM UTC and the Copy Activity execution in the second pipeline runs from 10:08 AM UTC to 10:17 AM UTC. | 10 min / 60 min \* 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
## Pricing calculator example
-**Total scenario pricing for 30 days: $129.02**
+**Total scenario pricing for 30 days: $42.14**
:::image type="content" source="media/pricing-concepts/scenario-5-pricing-calculator.png" alt-text="Screenshot of the pricing calculator configured for data integration with Managed VNET." lightbox="media/pricing-concepts/scenario-5-pricing-calculator.png":::
data-factory Pricing Examples Get Delta Data From Sap Ecc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-get-delta-data-from-sap-ecc.md
Last updated 09/22/2022
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-In this scenario, you want to get delta changes from one table in SAP ECC via SAP CDC connector, do a few necessary transforms in flight, and then write data to Azure Data Lake Gen2 storage in ADF mapping dataflow daily.
+In this scenario, you want to get delta changes from one table in SAP ECC via SAP CDC connector, do a few necessary transforms in flight, and then write data to Azure Data Lake Gen2 storage in ADF mapping dataflow daily. We will calculate prices for execution on a schedule once per hour for 8 hours over 30 days.
The prices used in this example below are hypothetical and aren't intended to imply exact actual pricing. Read/write and monitoring costs aren't shown since they're typically negligible and won't impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
Assuming every time it requires 15 minutes to complete the job, the cost estimat
| **Operations** | **Types and Units** | | | |
-| Run Pipeline | 2 Activity runs per execution (1 for trigger run, 1 for activity run) |
-| Data Flow: execution time per run = 15 mins | 15 min * 8 cores of General Compute |
-| Self-Hosted Integration Runtime: execution time per run = 15 mins | 15 min * $0.10/hour (Data Movement Activity on Self-Hosted Integration Runtime Price) |
+| Run Pipeline | 2 Activity runs **per execution** (1 for trigger run, 1 for activity run) = 480, rounded up since the calculator only allows increments of 1000. |
+| Data Flow: execution hours of general compute with 8 cores **per execution** = 15 mins | 15 min / 60 min |
+| Self-Hosted Integration Runtime: data movement execution hours **per execution** = 15 mins | 15 min / 60 min |
## Pricing calculator example
-**Total scenario pricing for 30 days: $17.21**
+**Total scenario pricing for 30 days: $138.66**
:::image type="content" source="media/pricing-concepts/scenario-6-pricing-calculator.png" alt-text="Screenshot of the pricing calculator configured for getting delta data from SAP ECC via SAP CDC in mapping data flows." lightbox="media/pricing-concepts/scenario-6-pricing-calculator.png":::
data-factory Pricing Examples S3 To Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-s3-to-blob.md
To accomplish the scenario, you need to create a pipeline with the following items:
| **Operations** | **Types and Units** | | | |
-| Run Pipeline | 2 Activity runs per execution (1 for the trigger to run, 1 for activity to run) |
-| Copy Data Assumption: execution hours **per run** | 0.5 hours \* 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
-| Total execution hours: 8 runs per day for 30 days | 240 runs * 2 DIU/run = 480 DIUs |
+| Run Pipeline | 2 Activity runs **per execution** (1 for the trigger to run, 1 for activity to run) = 480 activity runs, rounded up to 1,000 since the calculator only allows increments of 1,000. |
+| Copy Data Assumption: DIU hours **per execution** | 0.5 hours \* 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
+| Total execution hours: 8 executions per day for 30 days | 240 executions \* 2 DIU-hours per execution = 480 DIU-hours |
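In other words, each execution consumes 0.5 hours \* 4 DIUs = 2 DIU-hours, and 8 executions per day for 30 days is 240 executions, giving 240 \* 2 = 480 DIU-hours for the month.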
## Pricing calculator example
data-factory Pricing Examples Transform Mapping Data Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-transform-mapping-data-flows.md
Last updated 09/22/2022
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-In this scenario, you want to transform data in Blob Store visually in ADF mapping data flows on an hourly schedule for 30 days.
+In this scenario, you want to transform data in Blob Store visually in ADF mapping data flows on an hourly schedule for 8 hours per day over 30 days.
The prices used in this example below are hypothetical and are not intended to imply exact actual pricing. Read/write and monitoring costs are not shown since they are typically negligible and will not impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
To accomplish the scenario, you need to create a pipeline with the following items:
| **Operations** | **Types and Units** | | | |
-| Run Pipeline | 2 Activity runs per execution (1 for trigger run, 1 for activity runs) |
-| Data Flow Assumptions: execution time per run = 10 min + 10 min TTL | 10 \* 16 cores of General Compute with TTL of 10 |
+| Run Pipeline | 2 Activity runs **per execution** (1 for trigger run, 1 for activity runs) = 480 activity runs, rounded up since the calculator only allows increments of 1000. |
+| Data Flow Assumptions: General purpose 16 vCore hours **per execution** = 10 min + 10 min TTL | 20 min / 60 min |
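Under these assumptions, each execution consumes 20 min / 60 min \* 16 vCores ≈ 5.33 vCore-hours; across 8 executions per day for 30 days (240 executions), that's roughly 1,280 vCore-hours of general purpose compute for the month.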
## Pricing calculator example
-**Total scenario pricing for 30 days: $1051.28**
+**Total scenario pricing for 30 days: $350.76**
:::image type="content" source="media/pricing-concepts/scenario-4a-pricing-calculator.png" alt-text="Screenshot of the orchestration section of the pricing calculator configured to transform data in a blob store with mapping data flows." lightbox="media/pricing-concepts/scenario-4a-pricing-calculator.png":::
databox Data Box Disk System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-system-requirements.md
Previously updated : 10/07/2021 Last updated : 10/11/2022
Here is a list of the supported storage types for the Data Box Disk.
| General-purpose v2 Standard<sup>*</sup> | Hot, Cool |
| General-purpose v2 Premium | |
| Blob storage account | |
+| Block Blob storage Premium | |
<sup>*</sup> *Azure Data Lake Storage Gen2 (ADLS Gen2) is supported.*
deployment-environments Concept Environments Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-environments-key-concepts.md
description: Learn the key concepts behind Azure Deployment Environments.
--++ Last updated 10/12/2022
deployment-environments Concept Environments Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-environments-scenarios.md
description: Learn about scenarios enabled by Azure Deployment Environments.
--++ Last updated 10/12/2022 # Scenarios for using Azure Deployment Environments Preview
deployment-environments Configure Catalog Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/configure-catalog-item.md
Title: Configure a Catalog Item in Azure Deployment Environments description: This article helps you configure a Catalog Item in GitHub repo or Azure DevOps repo. -+ Last updated 10/12/2022 -+ # Configure a Catalog Item in GitHub repo or Azure DevOps repo
deployment-environments How To Configure Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-catalog.md
description: Learn how to configure a catalog in your dev center to provide curated infra-as-code templates to your development teams to deploy self-serve environments. --++ Last updated 10/12/2022
deployment-environments How To Configure Devcenter Environment Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-devcenter-environment-types.md
description: Learn how to configure dev center environment types to define the types of environments that your developers can deploy. --++ Last updated 10/12/2022
deployment-environments How To Configure Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-managed-identity.md
description: Learn how to configure a managed identity that'll be used to deploy environments. --++ Last updated 10/12/2022
deployment-environments How To Configure Project Environment Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-project-environment-types.md
description: Learn how to configure environment types to define deployment settings and permissions available to developers when deploying environments in a project. --++ Last updated 10/12/2022
deployment-environments How To Configure Use Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-use-cli.md
description: Learn how to setup and use Deployment Environments Azure CLI extension to configure the Azure Deployment environments service. --++ Last updated 10/26/2022
deployment-environments Overview What Is Azure Deployment Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/overview-what-is-azure-deployment-environments.md
--++ Last updated 10/12/2022
deployment-environments Quickstart Create Access Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-access-environments.md
Title: Create and access Environments description: This quickstart shows you how to create and access environments in an Azure Deployment Environments Project.--++
Run the following steps in Azure CLI to create an Environment and configure resources:
az devcenter dev catalog-item list --dev-center <name> --project-name <name> -o table ```
-1. Create an environment by using a *catalog-item* ('infra-as-code' template) from the list of available catalog items.
+1. Create an environment by using a *catalog-item* ('infra-as-code' template defined in the [manifest.yaml](configure-catalog-item.md#add-a-new-catalog-item) file) from the list of available catalog items.
```azurecli az devcenter dev environment create --dev-center-name <devcenter-name> --project-name <project-name> -n <name> --environment-type <environment-type-name> --catalog-item-name <catalog-item-name> --catalog-name <catalog-name> ```
- If the specific *catalog-item* requires any parameters use `--deployment-parameters` and provide the parameters as a json-string or json-file, for example:
+ If the specific *catalog-item* requires any parameters use `--parameters` and provide the parameters as a json-string or json-file, for example:
```json $params = "{ 'name': 'firstMsi', 'location': 'northeurope' }" az devcenter dev environment create --dev-center-name <devcenter-name> --project-name <project-name> -n <name> --environment-type <environment-type-name>
- --catalog-item-name <catalog-item-name> catalog-name <catalog-name>
+ --catalog-item-name <catalog-item-name> --catalog-name <catalog-name>
--parameters $params ```
deployment-environments Quickstart Create And Configure Devcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md
Title: Configure the Azure Deployment Environments service description: This quickstart shows you how to configure the Azure Deployment Environments service. You'll create a dev center, attach an identity, attach a catalog, and create environment types.--++
After you've created a dev center, the next step is to attach an [identity](conc
|**Name**|Provide a name for your catalog.| |**Git clone URI**|Provide the URI to your GitHub or ADO repository.| |**Branch**|Provide the repository branch that you would like to connect.|
- |**Folder path**|Provide the repo path in which the [catalog item](concept-environments-key-concepts.md#catalog-items) exist.|
+ |**Folder path**|Provide the relative path in the repo where the [catalog item](concept-environments-key-concepts.md#catalog-items) exists.|
|**Secret identifier**|Provide the secret identifier that contains your Personal Access Token (PAT) for the repository.| :::image type="content" source="media/quickstart-create-and-configure-devcenter/add-new-catalog-form.png" alt-text="Screenshot of add new catalog page."::: 1. Confirm that the catalog is successfully added by checking the **Notifications**.
+1. Select the specific repo, and then select **Sync**.
+
+ :::image type="content" source="media/configure-catalog-item/sync-catalog-items.png" alt-text="Screenshot showing how to sync the catalog." :::
+ ## Create Environment types Environment types help you define the different types of environments your development teams can deploy. You can apply different settings per environment type.
deployment-environments Quickstart Create And Configure Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-projects.md
Title: Set up an Azure Deployment Environments Project description: This quickstart shows you how to create and configure an Azure Deployment Environments project and associate it with a dev center.--++
dev-box How To Manage Dev Box Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-projects.md
+
+ Title: How to manage a dev box project
+
+description: This article describes how to create and delete Microsoft Dev Box Preview dev box projects.
++++ Last updated : 10/26/2022+++
+<!-- Intent: As a dev infrastructure manager, I want to be able to manage dev box projects so that I can provide appropriate dev boxes to my users. -->
+
+# Manage a dev box project
+A project is the point of access to Microsoft Dev Box Preview for the development team members. A project contains dev box pools, which specify the dev box definitions and network connections used when dev boxes are created. Dev managers can configure the project with dev box pools that specify dev box definitions appropriate for their team's workloads. Dev box users create dev boxes from the dev box pools they have access to through their project memberships.
+
+Each project is associated with a single dev center. When you associate a project with a dev center, all the settings at the dev center level will be applied to the project automatically.
+
+## Project admins
+
+Microsoft Dev Box makes it possible for you to delegate administration of projects to a member of the project team. Project administrators can assist with the day-to-day management of projects for their team, like creating and managing dev box pools. To give users permission to manage projects, add them to the DevCenter Project Admin role. Project admins can perform the tasks in this article.
+
+To learn how to add a user to the Project Admin role, see [Provide access to a dev box project](#provide-access-to-a-dev-box-project).
++
+## Permissions
+To manage a dev box project, you need the following permissions:
+
+|Action|Permission required|
+|--|--|
+|Create or delete dev box project|Owner, Contributor, or Write permissions on the dev center in which you want to create the project. |
+|Update a dev box project|Owner, Contributor, or Write permissions on the project.|
+|Create, delete, and update dev box pools in the project|Owner, Contributor, or DevCenter Project Admin.|
+|Manage a dev box within the project|Owner, Contributor, or DevCenter Project Admin.|
+|Add a dev box user to the project|Owner permissions on the project.|
+
+## Create a dev box project
+
+The following steps show you how to create and configure a project in dev box.
+
+1. In the [Azure portal](https://portal.azure.com), in the search box, type *Projects* and then select **Projects** from the list.
+
+1. On the Projects page, select **+Create**.
+
+1. On the **Create a project** page, on the **Basics** tab, enter the following values:
+
+ |Name|Value|
+ |-|-|
+ |**Subscription**|Select the subscription in which you want to create the project.|
+ |**Resource group**|Select an existing resource group or select **Create new**, and enter a name for the resource group.|
+ |**Dev center**|Select the dev center to which you want to associate this project. All the dev center level settings will be applied to the project.|
+ |**Name**|Enter a name for your project. |
+ |**Description**|Enter a brief description of the project. |
+
+ :::image type="content" source="./media/how-to-manage-dev-box-projects/dev-box-project-create.png" alt-text="Screenshot of the Create a dev box project basics tab.":::
+
+1. [Optional] On the **Tags** tab, enter a name and value pair that you want to assign.
+
+1. Select **Review + Create**.
+
+1. On the **Review** tab, select **Create**.
+
+1. Confirm that the project is created successfully by checking the notifications. Select **Go to resource**.
+
+1. Verify that you see the **Project** page.
+## Delete a dev box project
+You can delete a dev box project when you're no longer using it. Deleting a project is permanent and cannot be undone. You cannot delete a project that has dev box pools associated with it.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search box, type *Projects* and then select **Projects** from the list.
+
+1. Open the project you want to delete.
+
+1. Select the dev box project you want to delete and then select **Delete**.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-projects/delete-project.png" alt-text="Screenshot of the list of existing dev box pools, with the one to be deleted selected.":::
+
+1. In the confirmation message, select **Confirm**.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-projects/confirm-delete-project.png" alt-text="Screenshot of the Delete dev box pool confirmation message.":::
++
+## Provide access to a dev box project
+Before users can create dev boxes based on the dev box pools in a project, you must provide access for them through a role assignment. The Dev Box User role enables dev box users to create, manage, and delete their own dev boxes. You must have sufficient permissions on a project before you can add users to it.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search box, type *Projects* and then select **Projects** from the list.
+
+1. Select the project you want to provide your team members access to.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-projects/projects-grid.png" alt-text="Screenshot of the list of existing projects.":::
+
+1. Select **Access Control (IAM)** from the left menu.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-projects/access-control-tab.png" alt-text="Screenshot showing the Project Access control page with the Access Control link highlighted.":::
+
+1. Select **Add** > **Add role assignment**.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-projects/add-role-assignment.png" alt-text="Screenshot showing the Add menu with Add role assignment highlighted.":::
+
+1. On the Add role assignment page, search for *devcenter dev box user*, select the **DevCenter Dev Box User** built-in role, and then select **Next**.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-projects/dev-box-user-role.png" alt-text="Screenshot showing the Add role assignment search box highlighted.":::
+
+1. On the Members page, select **+ Select Members**.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-projects/dev-box-user-select-members.png" alt-text="Screenshot showing the Members tab with Select members highlighted.":::
+
+1. On the **Select members** pane, select the Active Directory Users or Groups you want to add, and then select **Select**.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-projects/select-members-search.png" alt-text="Screenshot showing the Select members pane with a user account highlighted.":::
+
+1. On the Add role assignment page, select **Review + assign**.
+
+The user will now be able to view the project and all the pools within it. They can create dev boxes from any of the pools and manage those dev boxes from the [developer portal](https://aka.ms/devbox-portal).
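If you prefer to script this role assignment instead of using the portal, a minimal Azure CLI sketch is shown below. The resource group, project name, and user identifier are hypothetical placeholders; the role name matches the DevCenter Dev Box User built-in role referenced above.

```azurecli
# Look up the resource ID of the project (placeholder resource group and project names).
projectId=$(az resource show \
  --resource-group my-resource-group \
  --name my-dev-box-project \
  --resource-type "Microsoft.DevCenter/projects" \
  --query id --output tsv)

# Assign the DevCenter Dev Box User role to a user at the project scope.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "DevCenter Dev Box User" \
  --scope "$projectId"
```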
+
+## Next steps
+
+- [Manage dev box pools](./how-to-manage-dev-box-pools.md)
+- [Create dev box definitions](./quickstart-configure-dev-box-service.md#create-a-dev-box-definition)
+- [Configure an Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md)
digital-twins Reference Query Clause Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/reference-query-clause-match.md
description: Reference documentation for the Azure Digital Twins query language MATCH clause Previously updated : 05/11/2022 Last updated : 11/01/2022
This clause is optional while querying.
## Core syntax: MATCH
-`MATCH` supports any query that finds a path between twins with an unpredictable number of hops, based on certain relationship conditions.
+`MATCH` supports any query that finds a path between twins within a range of hops, based on certain relationship conditions.
The relationship condition can include one or more of the following details: * [Relationship direction](#specify-relationship-direction) (left-to-right, right-to-left, or non-directional)
A query with a `MATCH` clause must also use the [WHERE clause](reference-query-c
Here's the basic `MATCH` syntax.
-The placeholder values shown in the `MATCH` clause that should be replaced with your values are `twin_1`, `relationship_condition`, and `twin_2`. The placeholder values in the `WHERE` clause that should be replaced with your values are `twin_or_twin_collection` and `twin_ID`.
+It contains these placeholders:
+* `twin_or_twin_collection` (x2): The `MATCH` clause requires one operand to represent a single twin. The other operand can represent another single twin, or a collection of twins.
+* `relationship_condition`: In this space, define a condition that describes the relationship between the twins or twin collections. The condition can [specify relationship direction](#specify-relationship-direction), [specify relationship name](#specify-relationship-name), [specify number of hops](#specify-number-of-hops), [specify relationship properties](#assign-query-variable-to-relationship-and-specify-relationship-properties), or [any combination of these options](#combine-match-operations).
+* `twin_ID`: Here, specify a `$dtId` within one of the twin collections so that one of the operands represents a single twin.
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" id="MatchSyntax":::
-You can leave out the name of one of the twins in order to allow any twin name to work in that spot.
+You can leave one of the twin collections blank in order to allow any twin to work in that spot.
You can also change the number of relationship conditions, to have multiple [chained](#combine-match-operations) relationship conditions or no relationship condition at all:
Use the relationship condition in the `MATCH` clause to specify a relationship d
Directional relationship descriptions use a visual depiction of an arrow to indicate the direction of the relationship. The arrow includes a space set aside by square brackets (`[]`) for an optional [relationship name](#specify-number-of-hops).
-This section shows the syntax for different directions of relationships. The placeholder values that should be replaced with your values are `source_twin` and `target_twin`.
+This section shows the syntax for different directions of relationships. The placeholder values that should be replaced with your values are `source_twin_or_twin_collection` and `target_twin_or_twin_collection`.
For a *left-to-right* relationship, use the following syntax.
If you don't provide a relationship name, the query will include all relationshi
Specify the name of a relationship to traverse in the `MATCH` clause within square brackets (`[]`). This section shows the syntax of specifying named relationships.
-For a single name, use the following syntax. The placeholder values that should be replaced with your values are `twin_1`, `relationship_name`, and `twin_2`.
+For a single name, use the following syntax. The placeholder values that should be replaced with your values are `twin_or_twin_collection_1`, `relationship_name`, and `twin_or_twin_collection_2`.
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" id="MatchNameSingleSyntax":::
-For multiple possible names use the following syntax. The placeholder values that should be replaced with your values are `twin_1`, `relationship_name_option_1`, `relationship_name_option_2`, `twin_2`, and the note to continue the pattern as needed for the number of relationship names you want to enter.
+For multiple possible names use the following syntax. The placeholder values that should be replaced with your values are `twin_or_twin_collection_1`, `relationship_name_option_1`, `relationship_name_option_2`, `twin_or_twin_collection_2`, and the note to continue the pattern as needed for the number of relationship names you want to enter.
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" id="MatchNameMultiSyntax":::
If you don't provide a number of hops, the query will default to one hop.
Specify the number of hops to traverse in the `MATCH` clause within the square brackets (`[]`).
-To specify an exact number of hops, use the following syntax. The placeholder values that should be replaced with your values are `twin_1`, `number_of_hops`, and `twin_2`.
+To specify an exact number of hops, use the following syntax. The placeholder values that should be replaced with your values are `twin_or_twin_collection_1`, `number_of_hops`, and `twin_or_twin_collection_2`.
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" id="MatchHopsExactSyntax":::
-To specify a range of hops, use the following syntax. The placeholder values that should be replaced with your values are `twin_1`, `starting_limit`, `ending_limit` and `twin_2`. The starting limit isn't included in the range, while the ending limit is included.
+To specify a range of hops, use the following syntax. The placeholder values that should be replaced with your values are `twin_or_twin_collection_1`, `starting_limit`, `ending_limit` and `twin_or_twin_collection_2`. The starting limit isn't included in the range, while the ending limit is included.
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" id="MatchHopsRangeSyntax":::
A useful result of doing this is the ability to filter on relationship properties.
>[!NOTE] >The examples in this section focus on a query variable for the relationship. They all show non-directional relationships without specifying names. For instructions on how to do more with these other conditions, see [Specify relationship direction](#specify-relationship-direction) and [Specify relationship name](#specify-relationship-name). For information about how to use several of these together in the same query, see [Combine MATCH operations](#combine-match-operations).
-To assign a query variable to the relationship, put the variable name in the square brackets (`[]`). The placeholder values shown below that should be replaced with your values are `twin_1`, `relationship_variable`, and `twin_2`.
+To assign a query variable to the relationship, put the variable name in the square brackets (`[]`). The placeholder values shown below that should be replaced with your values are `twin_or_twin_collection_1`, `relationship_variable`, and `twin_or_twin_collection_2`.
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" id="MatchVariableSyntax":::
In a single query, you can combine [relationship direction](#specify-relationshi
The following syntax examples show how these attributes can be combined. You can also leave out any of the optional details shown in placeholders to omit that part of the condition.
-To specify relationship direction, relationship name, and number of hops within a single query, use the following syntax within the relationship condition. The placeholder values that should be replaced with your values are `twin_1` and `twin_2`, `optional_left_angle_bracket` and `optional_right_angle_bracket`, `relationship_name(s)`, and `number_of_hops`.
+To specify relationship direction, relationship name, and number of hops within a single query, use the following syntax within the relationship condition. The placeholder values that should be replaced with your values are `twin_or_twin_collection_1` and `twin_or_twin_collection_2`, `optional_left_angle_bracket` and `optional_right_angle_bracket`, `relationship_name(s)`, and `number_of_hops`.
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" id="MatchCombinedHopsSyntax":::
-To specify relationship direction, relationship name, and a query variable for the relationship within a single query, use the following syntax within the relationship condition. The placeholder values that should be replaced with your values are `twin_1` and `twin_2`, `optional_left_angle_bracket` and `optional_right_angle_bracket`, `relationship_variable`, and `relationship_name(s)`.
+To specify relationship direction, relationship name, and a query variable for the relationship within a single query, use the following syntax within the relationship condition. The placeholder values that should be replaced with your values are `twin_or_twin_collection_1` and `twin_or_twin_collection_2`, `optional_left_angle_bracket` and `optional_right_angle_bracket`, `relationship_variable`, and `relationship_name(s)`.
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" id="MatchCombinedVariableSyntax"::: >[!NOTE] >As per the options for [specifying relationship direction](#specify-relationship-direction), you must pick between a left angle bracket for a left-to-right relationship or a right angle bracket for a right-to-left relationship. You can't include both on the same arrow, but can represent bi-directional relationships by chaining.
-You can chain multiple relationship conditions together, like this. The placeholder values that should be replaced with your values are `twin_1`, all instances of `relationship_condition`, and `twin_2`.
+You can chain multiple relationship conditions together, like this. The placeholder values that should be replaced with your values are `twin_or_twin_collection_1`, all instances of `relationship_condition`, and `twin_or_twin_collection_2`.
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" id="MatchChainSyntax":::
dns Dns Private Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md
Previously updated : 09/27/2022 Last updated : 10/31/2022 #Customer intent: As an administrator, I want to evaluate Azure DNS Private Resolver so I can determine if I want to use it instead of my current DNS resolver service.
The following restrictions hold with respect to virtual networks:
### Subnet restrictions Subnets used for DNS resolver have the following limitations:
+- The following IP address space is reserved and can't be used for the DNS resolver service: 10.0.1.0 - 10.0.16.255.
+ - Do not use these class C networks or subnets within these networks for DNS resolver subnets: 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24, 10.0.4.0/24, 10.0.5.0/24, 10.0.6.0/24, 10.0.7.0/24, 10.0.8.0/24, 10.0.9.0/24, 10.0.10.0/24, 10.0.11.0/24, 10.0.12.0/24, 10.0.13.0/24, 10.0.14.0/24, 10.0.15.0/24, 10.0.16.0/24.
- A subnet must be a minimum of /28 address space or a maximum of /24 address space. - A subnet can't be shared between multiple DNS resolver endpoints. A single subnet can only be used by a single DNS resolver endpoint. - All IP configurations for a DNS resolver inbound endpoint must reference the same subnet. Spanning multiple subnets in the IP configuration for a single DNS resolver inbound endpoint isn't allowed.
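As a hedged illustration of these rules (hypothetical names, a /28 prefix chosen outside the reserved 10.0.1.0 - 10.0.16.255 space, and assuming the subnet is delegated to the DNS resolver service), a dedicated endpoint subnet could be created like this:

```azurecli
# Hypothetical subnet for a DNS resolver endpoint: /28 size, outside the
# reserved address space, delegated to Microsoft.Network/dnsResolvers.
az network vnet subnet create \
  --resource-group my-dns-rg \
  --vnet-name my-resolver-vnet \
  --name snet-resolver-inbound \
  --address-prefixes 10.10.0.0/28 \
  --delegations Microsoft.Network/dnsResolvers
```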
Outbound endpoints have the following limitations:
### Ruleset restrictions -- Rulesets can have up to 1000 rules.
+- Rulesets can have up to 25 rules.
### Other restrictions
expressroute Expressroute Connectivity Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-connectivity-models.md
Previously updated : 10/07/2020 Last updated : 10/31/2022 # ExpressRoute connectivity models
-You can create a connection between your on-premises network and the Microsoft cloud in four different ways, [CloudExchange Co-location](#CloudExchange), [Point-to-point Ethernet Connection](#Ethernet), [Any-to-any (IPVPN) Connection](#IPVPN), and [ExpressRoute Direct](#Direct). Connectivity providers may offer one or more connectivity models. You can work with your connectivity provider to pick the model that works best for you.
+
+ExpressRoute allows you to create a connection between your on-premises network and the Microsoft cloud in four different ways, [CloudExchange Co-location](#CloudExchange), [Point-to-point Ethernet Connection](#Ethernet), [Any-to-any (IPVPN) Connection](#IPVPN), and [ExpressRoute Direct](#Direct). Connectivity providers may offer more than one connectivity model. You can work with your connectivity provider to pick the model that works best for you.
<br><br> :::image type="content" source="./media/expressroute-connectivity-models/expressroute-connectivity-models-diagram.png" alt-text="ExpressRoute connectivity model diagram"::: ## <a name="CloudExchange"></a>Co-located at a cloud exchange
-If you're co-located in a facility with a cloud exchange, you can order virtual cross-connections to the Microsoft cloud through the co-location providerΓÇÖs Ethernet exchange. Co-location providers can offer either Layer 2 cross-connections, or managed Layer 3 cross-connections between your infrastructure in the co-location facility and the Microsoft cloud.
+
+If you're co-located in a facility with a cloud exchange, you can request virtual cross-connections to the Microsoft cloud through the co-location provider's Ethernet exchange. Co-location providers can offer either Layer 2 cross-connections, or managed Layer 3 cross-connections between your infrastructure in the co-location facility and the Microsoft cloud.
## <a name="Ethernet"></a>Point-to-point Ethernet connections
-You can connect your on-premises datacenters/offices to the Microsoft cloud through point-to-point Ethernet links. Point-to-point Ethernet providers can offer Layer 2 connections, or managed Layer 3 connections between your site and the Microsoft cloud.
+
+You can connect your on-premises datacenters or offices to the Microsoft cloud through point-to-point Ethernet links. Point-to-point Ethernet providers can offer Layer 2 connections, or managed Layer 3 connections between your site and the Microsoft cloud.
## <a name="IPVPN"></a>Any-to-any (IPVPN) networks
-You can integrate your WAN with the Microsoft cloud. IPVPN providers (typically MPLS VPN) offer any-to-any connectivity between your branch offices and datacenters. The Microsoft cloud can be interconnected to your WAN to make it look just like any other branch office. WAN providers typically offer managed Layer 3 connectivity. ExpressRoute capabilities and features are all identical across all of the above connectivity models.
+
+You can integrate your WAN with the Microsoft cloud. IPVPN providers (typically MPLS VPN) offer any-to-any connectivity between your branch offices and datacenters. The Microsoft cloud can be interconnected to your WAN to make it appear like any other branch office. WAN providers typically offer managed Layer 3 connectivity. ExpressRoute capabilities and features are all identical across all of the above connectivity models.
## <a name="Direct"></a>Direct from ExpressRoute sites
-You can connect directly into the Microsoft's global network at a peering location strategically distributed across the world. [ExpressRoute Direct](expressroute-erdirect-about.md) provides dual 100 Gbps or 10-Gbps connectivity, which supports Active/Active connectivity at scale.
+
+You can connect directly into the Microsoft global network at a peering location strategically distributed across the world. [ExpressRoute Direct](expressroute-erdirect-about.md) provides dual 100-Gbps or 10-Gbps connectivity that supports Active/Active connectivity at scale.
## Next steps * Learn about ExpressRoute connections and routing domains. See [ExpressRoute circuits and routing domains](expressroute-circuit-peerings.md).
expressroute Expressroute Erdirect About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-erdirect-about.md
Previously updated : 08/31/2021 Last updated : 10/31/2022 # About ExpressRoute Direct
-ExpressRoute Direct gives you the ability to connect directly into MicrosoftΓÇÖs global network at peering locations strategically distributed around the world. ExpressRoute Direct provides dual 100 Gbps or 10-Gbps connectivity, which supports Active/Active connectivity at scale. You can work with any service provider for ER Direct.
+ExpressRoute Direct gives you the ability to connect directly into the Microsoft global network at peering locations strategically distributed around the world. ExpressRoute Direct provides dual 100-Gbps or 10-Gbps connectivity that supports Active/Active connectivity at scale. You can work with any service provider to set up ExpressRoute Direct.
-Key features that ExpressRoute Direct provides include, but aren't limited to:
+Key features that ExpressRoute Direct provides include, but aren't limited to:
-* Massive data ingestion into services like Azure Storage and Azure Cosmos DB
-* Physical isolation for industries that are regulated and require dedicated and isolated connectivity like: Banking, Government, and Retail
-* Granular control of circuit distribution based on business unit
+* Large data ingestion into services like Azure Storage and Azure Cosmos DB.
+* Physical isolation for regulated industries that require dedicated or isolated connectivity, such as banking, government, and retail.
+* Granular control of circuit distribution based on business unit.
## Onboard to ExpressRoute Direct
-Before using ExpressRoute Direct, you must first enroll your subscription. To enroll, run the following commands using Azure PowerShell:
+Before you can set up ExpressRoute Direct, you must first enroll your subscription. Run the following commands using Azure PowerShell:
1. Sign in to Azure and select the subscription you wish to enroll.
Before using ExpressRoute Direct, you must first enroll your subscription. To en
Select-AzSubscription -Subscription "<SubscriptionID or SubscriptionName>" ```
-1. Register your subscription for Public Preview using the following command:
+1. Register the **AllowExpressRoutePorts** feature in your subscription using the following command:
```azurepowershell-interactive Register-AzProviderFeature -FeatureName AllowExpressRoutePorts -ProviderNamespace Microsoft.Network
Once enrolled, verify that **Microsoft.Network** resource provider is registered
1. In your subscription, for **Resource Providers**, verify **Microsoft.Network** provider shows a **Registered** status. If the Microsoft.Network resource provider isn't present in the list of registered providers, add it.
-If you begin to use ExpressRoute Direct and notice that there are no available ports in your chosen peering location, please log a support request to request more inventory.
+When you start using ExpressRoute Direct and notice that there aren't any available ports in your chosen peering location, submit a support request for more inventory.
## ExpressRoute using a service provider and ExpressRoute Direct
-| **ExpressRoute using a service provider** | **ExpressRoute Direct** |
+| ExpressRoute using a service provider | ExpressRoute Direct |
| | |
-| Uses service providers to enable fast onboarding and connectivity into existing infrastructure | Requires 100 Gbps/10 Gbps infrastructure and full management of all layers
-| Integrates with hundreds of providers including Ethernet and MPLS | Direct/Dedicated capacity for regulated industries and massive data ingestion |
-| Circuits SKUs from 50 Mbps to 10 Gbps | Customer may select a combination of the following circuit SKUs on 100-Gbps ExpressRoute Direct: <ul><li>5 Gbps</li><li>10 Gbps</li><li>40 Gbps</li><li>100 Gbps</li></ul> Customer may select a combination of the following circuit SKUs on 10-Gbps ExpressRoute Direct:<ul><li>1 Gbps</li><li>2 Gbps</li><li>5 Gbps</li><li>10 Gbps</li></ul>
-| Optimized for single tenant | Optimized for single tenant with multiple business units and multiple work environments
+| Uses a service provider to enable fast onboarding and connectivity into existing infrastructure | Requires 100-Gbps or 10-Gbps infrastructure and full management of all layers |
+| Integrates with hundreds of providers including Ethernet and MPLS | Direct and Dedicated capacity for regulated industries and large data ingestion |
+| Circuit SKUs ranging from 50 Mbps to 10 Gbps | Customer may select a combination of the following circuit SKUs on 100-Gbps ExpressRoute Direct: <ul><li>5 Gbps</li><li>10 Gbps</li><li>40 Gbps</li><li>100 Gbps</li></ul> Customer may select a combination of the following circuit SKUs on 10-Gbps ExpressRoute Direct:<ul><li>1 Gbps</li><li>2 Gbps</li><li>5 Gbps</li><li>10 Gbps</li></ul>
+| Optimized for a single tenant | Optimized for single tenant with multiple business units and multiple work environments
## ExpressRoute Direct circuits
-Microsoft Azure ExpressRoute allows you to extend your on-premises network into the Microsoft cloud over a private connection made easier by a connectivity provider. With ExpressRoute, you can establish connections to Microsoft cloud services, such as Microsoft Azure, and Microsoft 365.
+Azure ExpressRoute allows you to extend your on-premises network into the Microsoft cloud over a private connection made possible through a connectivity provider. With ExpressRoute, you can establish connections to Microsoft cloud services, such as Microsoft Azure, and Microsoft 365.
-Each peering location has access to MicrosoftΓÇÖs global network and can access any region in a geopolitical zone by default. You can access all global regions with a premium circuit.
+Each peering location has access to the Microsoft global network and can access any region in a geopolitical zone by default. You can access all global regions when you set up a premium circuit.
-The functionality in most scenarios is equivalent to circuits that use an ExpressRoute service provider to operate. To support further granularity and new capabilities offered using ExpressRoute Direct, there are certain key capabilities that exist on ExpressRoute Direct Circuits.
+The functionality in most scenarios is equivalent to circuits that use an ExpressRoute service provider to operate. To support further granularity and new capabilities offered using ExpressRoute Direct, there are certain key capabilities that exist only with ExpressRoute Direct circuits.
## Circuit SKUs
-ExpressRoute Direct supports massive data ingestion scenarios into Azure storage and other big data services. ExpressRoute circuits on 100-Gbps ExpressRoute Direct now also support **40 Gbps** and **100 Gbps** circuit SKUs. The physical port pairs are **100 Gbps or 10 Gbps** only and can have multiple virtual circuits. Circuit sizes:
+ExpressRoute Direct supports large data ingestion scenarios into services such as Azure storage. ExpressRoute circuits with 100-Gbps ExpressRoute Direct also support **40 Gbps** and **100 Gbps** circuit bandwidth. The physical port pairs are **100 Gbps or 10 Gbps** only and can have multiple virtual circuits.
-| **100-Gbps ExpressRoute Direct** | **10-Gbps ExpressRoute Direct** |
+### Circuit sizes
+
+| 100-Gbps ExpressRoute Direct | 10-Gbps ExpressRoute Direct |
| | |
-| **Subscribed Bandwidth**: 200 Gbps | **Subscribed Bandwidth**: 20 Gbps |
+| Subscribed Bandwidth: 200 Gbps | Subscribed Bandwidth: 20 Gbps |
| <ul><li>5 Gbps</li><li>10 Gbps</li><li>40 Gbps</li><li>100 Gbps</li></ul> | <ul><li>1 Gbps</li><li>2 Gbps</li><li>5 Gbps</li><li>10 Gbps</li></ul>
-> You are able to provision logical ExpressRoute circuits on top of your chosen ExpressRoute Direct resource (10G/100G) up to the Subscribed Bandwidth (20G/200G). E.g. You can provision two 10G ExpressRoute circuits within a single 10G ExpressRoute Direct resource (port pair). Today you must use Azure CLI or PowerShell when configuring circuits that over-subscribe the ExpressRoute Direct resource.
+> [!NOTE]
+> You can provision logical ExpressRoute circuits on top of your selected ExpressRoute Direct resource of 10 Gbps or 100 Gbps, up to the subscribed bandwidth of 20 Gbps or 200 Gbps. For example, you can provision two 10-Gbps ExpressRoute circuits within a single 10-Gbps ExpressRoute Direct resource (port pair). Configuring circuits that over-subscribe the ExpressRoute Direct resource is only available with Azure PowerShell and Azure CLI.
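For illustration only (resource names are placeholders, and the exact values depend on your ExpressRoute Direct resource), a circuit provisioned on an ExpressRoute Direct port pair with the Azure CLI might look roughly like this. Repeating the create command with a different circuit name would over-subscribe the port pair, which is the scenario the note above describes.

```azurecli
# Get the resource ID of an existing ExpressRoute Direct port pair (placeholder names).
portId=$(az network express-route port show \
  --resource-group er-direct-rg \
  --name er-direct-port \
  --query id --output tsv)

# Create a 10-Gbps circuit on the ExpressRoute Direct resource.
az network express-route create \
  --resource-group er-direct-rg \
  --name er-circuit-01 \
  --location eastus \
  --sku-tier Premium \
  --sku-family MeteredData \
  --express-route-port "$portId" \
  --bandwidth 10 Gbps
```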
## Technical Requirements
ExpressRoute Direct supports massive data ingestion scenarios into Azure storage
ExpressRoute Direct supports both QinQ and Dot1Q VLAN tagging.
-* **QinQ VLAN Tagging** allows for isolated routing domains on a per ExpressRoute circuit basis. Azure dynamically gives an S-Tag at circuit creation and cannot be changed. Each peering on the circuit (Private and Microsoft) will use a unique C-Tag as the VLAN. The C-Tag isn't required to be unique across circuits on the ExpressRoute Direct ports.
+* **QinQ VLAN Tagging** allows for isolated routing domains on a per ExpressRoute circuit basis. Azure dynamically assigns an S-Tag at circuit creation, which can't be changed. Each peering on the circuit (Private and Microsoft) will use a unique C-Tag as the VLAN. The C-Tag isn't required to be unique across circuits on the ExpressRoute Direct ports.
* **Dot1Q VLAN Tagging** allows for a single tagged VLAN on a per ExpressRoute Direct port pair basis. A C-Tag used on a peering must be unique across all circuits and peerings on the ExpressRoute Direct port pair.
ExpressRoute Direct supports both QinQ and Dot1Q VLAN tagging.
### Set up ExpressRoute Direct ### Delete ExpressRoute Direct
ExpressRoute Direct supports both QinQ and Dot1Q VLAN tagging.
## SLA
-ExpressRoute Direct provides the same enterprise-grade SLA with Active/Active redundant connections into the Microsoft Global Network. ExpressRoute infrastructure is redundant and connectivity into the Microsoft Global Network is redundant and diverse and scales correctly with customer requirements.
+ExpressRoute Direct provides the same enterprise-grade SLA with Active/Active redundant connections into the Microsoft Global Network. ExpressRoute infrastructure is redundant, and connectivity into the Microsoft Global Network is redundant, diverse, and scales with customer requirements. For more information, see [ExpressRoute SLA](https://azure.microsoft.com/support/legal/sla/expressroute/v1_3/).
## Pricing
firewall Create Ip Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/create-ip-group.md
Previously updated : 06/23/2020 Last updated : 10/31/2022 ms.devlang: azurecli
To create an IP Group by using the Azure portal:
1. Select **Create**. 1. Select your subscription. 1. Select a resource group or create a new one.
-1. Enter a unique name for you IP Group, and then select a region.
+1. Enter a unique name for your IP Group, and then select a region.
1. Select **Next: IP addresses**. 1. Type an IP address, multiple IP addresses, or IP address ranges.
This example creates an IP Group with an address prefix and an IP address by usi
```azurepowershell $ipGroup = @{ Name = 'ipGroup'
- ResourceGroupName = 'ipGroupRG'
- Location = 'West US'
+ ResourceGroupName = 'Test-FW-RG'
+ Location = 'East US'
IpAddress = @('10.0.0.0/24', '192.168.1.10') }
This example creates an IP Group with an address prefix and an IP address by usi
```azurecli-interactive az network ip-group create \
- --name ipGroup \
- --resource-group ipGroupRG \
- --location westus \
+ --name ipGroup \
+ --resource-group Test-FW-RG \
+ --location eastus \
--ip-addresses '10.0.0.0/24' '192.168.1.10' ```
firewall Deploy Availability Zone Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-availability-zone-powershell.md
Previously updated : 11/19/2019 Last updated : 10/31/2022
When the standard public IP address is created, no specific zone is specified. T
It's important to know, because you can't have a firewall in zone 1 and an IP address in zone 2. But you can have a firewall in zone 1 and IP address in all zones, or a firewall and an IP address in the same single zone for proximity purposes. ```azurepowershell
-$rgName = "resourceGroupName"
+$rgName = "Test-FW-RG"
$vnet = Get-AzVirtualNetwork `
- -Name "vnet" `
+ -Name "Test-FW-VN" `
-ResourceGroupName $rgName $pip1 = New-AzPublicIpAddress ` -Name "AzFwPublicIp1" `
- -ResourceGroupName "rg" `
+ -ResourceGroupName "Test-FW-RG" `
-Sku "Standard" `
- -Location "centralus" `
- -AllocationMethod Static
+ -Location "eastus" `
+ -AllocationMethod Static `
+ -Zone 1,2,3
New-AzFirewall ` -Name "azFw" ` -ResourceGroupName $rgName `
- -Location centralus `
+ -Location "eastus" `
-VirtualNetwork $vnet ` -PublicIpAddress @($pip1) ` -Zone 1,2,3
firewall Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-cli.md
description: In this article, you learn how to deploy and configure Azure Firewa
Previously updated : 08/29/2019 Last updated : 10/31/2022 #Customer intent: As an administrator new to this service, I want to control outbound network access from resources located in an Azure subnet.
For this article, you create a simplified single VNet with three subnets for easy deployment.
* **Workload-SN** - the workload server is in this subnet. This subnet's network traffic goes through the firewall. * **Jump-SN** - The "jump" server is in this subnet. The jump server has a public IP address that you can connect to using Remote Desktop. From there, you can then connect to (using another Remote Desktop) the workload server.
-![Tutorial network infrastructure](media/tutorial-firewall-rules-portal/Tutorial_network.png)
In this article, you learn how to:
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Azure Firewall Standard has the following known issues:
|Azure Firewall DNAT doesn't work for private IP destinations|Azure Firewall DNAT support is limited to Internet egress/ingress. DNAT doesn't currently work for private IP destinations. For example, spoke to spoke.|This is a current limitation.| |Can't remove first public IP configuration|Each Azure Firewall public IP address is assigned to an *IP configuration*. The first IP configuration is assigned during the firewall deployment, and typically also contains a reference to the firewall subnet (unless configured explicitly differently via a template deployment). You can't delete this IP configuration because it would de-allocate the firewall. You can still change or remove the public IP address associated with this IP configuration if the firewall has at least one other public IP address available to use.|This is by design.| |Availability zones can only be configured during deployment.|Availability zones can only be configured during deployment. You can't configure Availability Zones after a firewall has been deployed.|This is by design.|
-|SNAT on inbound connections|In addition to DNAT, connections via the firewall public IP address (inbound) are SNATed to one of the firewall private IPs. This requirement today (also for Active/Active NVAs) to ensure symmetric routing.|To preserve the original source for HTTP/S, consider using [XFF](https://en.wikipedia.org/wiki/X-Forwarded-For) headers. For example, use a service such as [Azure Front Door](../frontdoor/front-door-http-headers-protocol.md#front-door-to-backend) or [Azure Application Gateway](../application-gateway/rewrite-http-headers-url.md) in front of the firewall. You can also add WAF as part of Azure Front Door and chain to the firewall.
+|SNAT on inbound connections|In addition to DNAT, connections via the firewall public IP address (inbound) are SNATed to one of the firewall private IPs. This requirement exists today (also for Active/Active NVAs) to ensure symmetric routing.|To preserve the original source for HTTP/S, consider using [XFF](https://en.wikipedia.org/wiki/X-Forwarded-For) headers. For example, use a service such as [Azure Front Door](../frontdoor/front-door-http-headers-protocol.md#from-the-front-door-to-the-backend) or [Azure Application Gateway](../application-gateway/rewrite-http-headers-url.md) in front of the firewall. You can also add WAF as part of Azure Front Door and chain to the firewall.
|SQL FQDN filtering support only in proxy mode (port 1433)|For Azure SQL Database, Azure Synapse Analytics, and Azure SQL Managed Instance:<br><br>SQL FQDN filtering is supported in proxy-mode only (port 1433).<br><br>For Azure SQL IaaS:<br><br>If you're using non-standard ports, you can specify those ports in the application rules.|For SQL in redirect mode (the default if connecting from within Azure), you can instead filter access using the SQL service tag as part of Azure Firewall network rules. |Outbound SMTP traffic on TCP port 25 is blocked|Outbound email messages that are sent directly to external domains (like `outlook.com` and `gmail.com`) on TCP port 25 can be blocked by Azure platform. This is the default platform behavior in Azure, Azure Firewall does not introduce any additional specific restriction. |Use authenticated SMTP relay services, which typically connect through TCP port 587, but also supports other ports. For more information, see [Troubleshoot outbound SMTP connectivity problems in Azure](../virtual-network/troubleshoot-outbound-smtp-connectivity.md). Currently, Azure Firewall may be able to communicate to public IPs by using outbound TCP 25, but it's not guaranteed to work, and it's not supported for all subscription types. For private IPs like virtual networks, VPNs, and Azure ExpressRoute, Azure Firewall supports an outbound connection of TCP port 25. |SNAT port exhaustion|Azure Firewall currently supports 2496 ports per Public IP address per backend virtual machine scale set instance. By default, there are two virtual machine scale set instances. So, there are 4992 ports per flow (destination IP, destination port and protocol (TCP or UDP). The firewall scales up to a maximum of 20 instances. |This is a platform limitation. You can work around the limits by configuring Azure Firewall deployments with a minimum of five public IP addresses for deployments susceptible to SNAT exhaustion. This increases the SNAT ports available by five times. Allocate from an IP address prefix to simplify downstream permissions. For a more permanent solution, you can deploy a NAT gateway to overcome the SNAT port limits. This approach is supported for VNET deployments. <br /><br /> For more information, see [Scale SNAT ports with Azure Virtual Network NAT](integrate-with-nat-gateway.md).|
firewall Sql Fqdn Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/sql-fqdn-filtering.md
Previously updated : 06/18/2020 Last updated : 10/31/2022
If you use non-default ports for SQL IaaS traffic, you can configure those ports
## Configure using Azure CLI 1. Deploy an [Azure Firewall using Azure CLI](deploy-cli.md).
-2. If you filter traffic to Azure SQL Database, Azure Synapse Analytics, or SQL Managed Instance, ensure the SQL connectivity mode is set to **Proxy**. To learn how to switch SQL connectivity mode, see [Azure SQL Connectivity Settings](/azure/azure-sql/database/connectivity-settings#change-the-connection-policy-via-the-azure-cli).
+1. If you filter traffic to Azure SQL Database, Azure Synapse Analytics, or SQL Managed Instance, ensure the SQL connectivity mode is set to **Proxy**. To learn how to switch SQL connectivity mode, see [Azure SQL Connectivity Settings](/azure/azure-sql/database/connectivity-settings#change-the-connection-policy-via-the-azure-cli).
> [!NOTE] > SQL *proxy* mode can result in more latency compared to *redirect*. If you want to continue using redirect mode, which is the default for clients connecting within Azure, you can filter access using the SQL [service tag](service-tags.md) in firewall [network rules](tutorial-firewall-deploy-portal.md#configure-a-network-rule).
-3. Create a new rule collection with an application rule using SQL FQDN to allow access to a SQL server:
+1. Create a new rule collection with an application rule using SQL FQDN to allow access to a SQL server:
```azurecli az extension add -n azure-firewall az network firewall application-rule create \
- -g FWRG \
- --f azfirewall \
- --c sqlRuleCollection \
- --priority 1000 \
- --action Allow \
- --name sqlRule \
- --protocols mssql=1433 \
- --source-addresses 10.0.0.0/24 \
- --target-fqdns sql-serv1.database.windows.net
+ --resource-group Test-FW-RG \
+ --firewall-name Test-FW01 \
+ --collection-name sqlRuleCollection \
+ --priority 1000 \
+ --action Allow \
+ --name sqlRule \
+ --protocols mssql=1433 \
+ --source-addresses 10.0.0.0/24 \
+ --target-fqdns sql-serv1.database.windows.net
``` ## Configure using Azure PowerShell 1. Deploy an [Azure Firewall using Azure PowerShell](deploy-ps.md).
-2. If you filter traffic to Azure SQL Database, Azure Synapse Analytics, or SQL Managed Instance, ensure the SQL connectivity mode is set to **Proxy**. To learn how to switch SQL connectivity mode, see [Azure SQL Connectivity Settings](/azure/azure-sql/database/connectivity-settings#change-the-connection-policy-via-the-azure-cli).
+1. If you filter traffic to Azure SQL Database, Azure Synapse Analytics, or SQL Managed Instance, ensure the SQL connectivity mode is set to **Proxy**. To learn how to switch SQL connectivity mode, see [Azure SQL Connectivity Settings](/azure/azure-sql/database/connectivity-settings#change-the-connection-policy-via-the-azure-cli).
> [!NOTE] > SQL *proxy* mode can result in more latency compared to *redirect*. If you want to continue using redirect mode, which is the default for clients connecting within Azure, you can filter access using the SQL [service tag](service-tags.md) in firewall [network rules](tutorial-firewall-deploy-portal.md#configure-a-network-rule).
-3. Create a new rule collection with an application rule using SQL FQDN to allow access to a SQL server:
+1. Create a new rule collection with an application rule using SQL FQDN to allow access to a SQL server:
```azurepowershell
- $AzFw = Get-AzFirewall -Name "azfirewall" -ResourceGroupName "FWRG"
+ $AzFw = Get-AzFirewall -Name "Test-FW01" -ResourceGroupName "Test-FW-RG"
$sqlRule = @{ Name = "sqlRule"
If you use non-default ports for SQL IaaS traffic, you can configure those ports
## Configure using the Azure portal 1. Deploy an [Azure Firewall using Azure CLI](deploy-cli.md).
-2. If you filter traffic to Azure SQL Database, Azure Synapse Analytics, or SQL Managed Instance, ensure the SQL connectivity mode is set to **Proxy**. To learn how to switch SQL connectivity mode, see [Azure SQL Connectivity Settings](/azure/azure-sql/database/connectivity-settings#change-the-connection-policy-via-the-azure-cli).
+1. If you filter traffic to Azure SQL Database, Azure Synapse Analytics, or SQL Managed Instance, ensure the SQL connectivity mode is set to **Proxy**. To learn how to switch SQL connectivity mode, see [Azure SQL Connectivity Settings](/azure/azure-sql/database/connectivity-settings#change-the-connection-policy-via-the-azure-cli).
> [!NOTE] > SQL *proxy* mode can result in more latency compared to *redirect*. If you want to continue using redirect mode, which is the default for clients connecting within Azure, you can filter access using the SQL [service tag](service-tags.md) in firewall [network rules](tutorial-firewall-deploy-portal.md#configure-a-network-rule).
-3. Add the application rule with the appropriate protocol, port, and SQL FQDN and then select **Save**.
+
+1. Add the application rule with the appropriate protocol, port, and SQL FQDN and then select **Save**.
![application rule with SQL FQDN](media/sql-fqdn-filtering/application-rule-sql.png)
-4. Access SQL from a virtual machine in a VNet that filters the traffic through the firewall.
-5. Validate that [Azure Firewall logs](./firewall-workbook.md) show the traffic is allowed.
+1. Access SQL from a virtual machine in a VNet that filters the traffic through the firewall.
+1. Validate that [Azure Firewall logs](./firewall-workbook.md) show the traffic is allowed.
## Next steps
frontdoor Front Door Ddos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-ddos.md
Title: Azure Front Door - DDoS protection
+ Title: DDoS protection on Azure Front Door
description: This page provides information about how Azure Front Door helps to protect against DDoS attacks -+ na Previously updated : 10/28/2020- Last updated : 10/31/2022+ # DDoS protection on Front Door
Front Door only accepts traffic on the HTTP and HTTPS protocols, and will only p
## Capacity absorption
-Front Door is a massively scaled, globally distributed service. We have many customers, including Microsoft's own large-scale cloud products, that receive hundreds of thousands of requests each second. Front Door is located at the edge of Azure's network, absorbing and geographically isolating large volume attacks. This can prevent malicious traffic from going any further than the edge of the Azure network.
+Front Door is a large-scale, globally distributed service. We have many customers, including Microsoft's own large-scale cloud products, that receive hundreds of thousands of requests each second. Front Door is located at the edge of Azure's network, absorbing and geographically isolating large-volume attacks. This can prevent malicious traffic from going any further than the edge of the Azure network.
## Caching
Front Door is a massively scaled, globally distributed service. We have many cus
## Web Application Firewall (WAF)
-[Front Door's Web Application Firewall (WAF)](../web-application-firewall/afds/afds-overview.md) can be used to mitigate a number of different types of attacks:
+[Front Door's Web Application Firewall (WAF)](../web-application-firewall/afds/afds-overview.md) can be used to mitigate many different types of attacks:
-* Using the managed rule set provides protection against a number of common attacks.
+* Using the managed rule set provides protection against many common attacks.
* Traffic from outside a defined geographic region, or within a defined region, can be blocked or redirected to a static webpage. For more information, see [Geo-filtering](../web-application-firewall/afds/waf-front-door-geo-filtering.md). * IP addresses and ranges that you identify as malicious can be blocked. * Rate limiting can be applied to prevent IP addresses from calling your service too frequently.
Front Door is a massively scaled, globally distributed service. We have many cus
## For further protection
-If you require further protection, then you can enable [Azure DDoS Protection Standard](../ddos-protection/ddos-protection-overview.md) on the VNet where your back-ends are deployed. DDoS Protection Standard customers receive additional benefits including cost protection, SLA guarantee, and access to experts from the DDoS Rapid Response Team for immediate help during an attack.
+If you require further protection, then you can enable [Azure DDoS Protection Standard](../ddos-protection/ddos-protection-overview.md) on the VNet where your back-ends are deployed. DDoS Protection Standard customers receive extra benefits including cost protection, SLA guarantee, and access to experts from the DDoS Rapid Response Team for immediate help during an attack.
## Next steps -- Learn how to configure a [WAF profile on Front Door](front-door-waf.md). -- Learn how to [create a Front Door](quickstart-create-front-door.md).-- Learn [how Front Door works](front-door-routing-architecture.md).
+- Learn how to configure a [WAF policy for Azure Front Door](front-door-waf.md).
+- Learn how to [create an Azure Front Door profile](quickstart-create-front-door.md).
+- Learn [how Azure Front Door works](front-door-routing-architecture.md).
frontdoor Front Door Http Headers Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-http-headers-protocol.md
Title: Protocol support for HTTP headers in Azure Front Door | Microsoft Docs description: This article describes HTTP header protocols that Front Door supports. na Previously updated : 08/10/2021 Last updated : 10/31/2022 # Protocol support for HTTP headers in Azure Front Door
-This article outlines the protocol that Front Door supports with parts of the call path (see image). The following sections provide more information about HTTP headers supported by Front Door.
+
+This article outlines the protocol that Front Door supports with parts of the call path (see image). In the following sections, you'll find information about HTTP headers supported by Front Door.
:::image type="content" source="./media/front-door-http-headers-protocol/front-door-protocol-summary.png" alt-text="Azure Front Door HTTP headers protocol":::
->[!IMPORTANT]
->Front Door doesn't certify any HTTP headers that aren't documented here.
+> [!IMPORTANT]
+> Front Door doesn't certify any HTTP headers that aren't documented here.
-## Client to Front Door
+## From the client to the Front Door
-Front Door accepts most headers for the incoming request without modifying them. Some reserved headers are removed from the incoming request if sent, including headers with the X-FD-* prefix.
+Azure Front Door accepts most headers for the incoming request without modifying them. Some reserved headers are removed from the incoming request if sent, including headers with the X-FD-* prefix.
-The debug request header, "X-Azure-DebugInfo", provides additional debugging information about the Front Door. You need to send "X-Azure-DebugInfo: 1" request header from client to Front Door to receive [optional response headers](#optional-debug-response-headers) from Front Door to client.
+The debug request header, "X-Azure-DebugInfo", provides extra debugging information about Front Door. You'll need to send the "X-Azure-DebugInfo: 1" request header from the client to Azure Front Door to receive [optional response headers](#optional-debug-response-headers) in the Front Door response to the client.
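For illustration, one way to request the optional debug headers from a shell; the endpoint name here is hypothetical:

```bash
# Send the debug request header and print the response headers from a hypothetical Front Door endpoint
curl -I -H "X-Azure-DebugInfo: 1" https://contoso.azurefd.net/
```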
-## Front Door to backend
+## From the Front Door to the backend
-Front Door includes headers for an incoming request unless they're removed because of restrictions. Front Door also adds the following headers:
+Azure Front Door includes headers for an incoming request unless they're removed because of restrictions. Front Door also adds the following headers:
| Header | Example and description | | - | - |
Front Door includes headers for an incoming request unless they're removed becau
| X-Forwarded-Proto | *X-Forwarded-Proto: http* </br> The X-Forwarded-Proto HTTP header field is often used to identify the originating protocol of an HTTP request. Front Door based on configuration might communicate with the backend by using HTTPS. This is true even if the request to the reverse proxy is HTTP. Any previous value will be overridden by Front Door. | | X-FD-HealthProbe | X-FD-HealthProbe HTTP header field is used to identify the health probe from Front Door. If this header is set to 1, the request is from the health probe. It can be used to restrict access from Front Door with a particular value for the X-Forwarded-Host header field. |
-## Front Door to client
+## From the Front Door to the client
-Any headers sent to Front Door from the backend are also passed through to the client. The following are headers sent from Front Door to clients.
+Any headers sent to Azure Front Door from the backend are also passed through to the client. The following are headers sent from the Front Door to clients.
| Header | Example and description | | - | - | | X-Azure-Ref | *X-Azure-Ref: 0zxV+XAAAAABKMMOjBv2NT4TY6SQVjC0zV1NURURHRTA2MTkANDM3YzgyY2QtMzYwYS00YTU0LTk0YzMtNWZmNzA3NjQ3Nzgz* </br> This is a unique reference string that identifies a request served by Front Door, which is critical for troubleshooting as it's used to search access logs.|
-| X-Cache | *X-Cache:* This header describes the caching status of the request <br/> - *X-Cache: TCP_HIT* : The first byte of the request is a cache hit in the Front Door edge. <br/> - *X-Cache: TCP_REMOTE_HIT*: The first byte of the request is a cache hit in the regional cache (origin shield layer) but a miss in the edge cache. <br/> - *X-Cache: TCP_MISS*: The first byte of the request is a cache miss, and the content is served from the origin. <br/> - *X-Cache: PRIVATE_NOSTORE* : Request cannot be cached as Cache-Control response header is set to either private or no-store. <br/> - *X-Cache: CONFIG_NOCACHE*: Request is configured to not cache in the Front Door profile. |
+| X-Cache | *X-Cache:* This header describes the caching status of the request <br/> - *X-Cache: TCP_HIT*: The first byte of the request is a cache hit in the Front Door edge. <br/> - *X-Cache: TCP_REMOTE_HIT*: The first byte of the request is a cache hit in the regional cache (origin shield layer) but a miss in the edge cache. <br/> - *X-Cache: TCP_MISS*: The first byte of the request is a cache miss, and the content is served from the origin. <br/> - *X-Cache: PRIVATE_NOSTORE*: Request can't be cached as Cache-Control response header is set to either private or no-store. <br/> - *X-Cache: CONFIG_NOCACHE*: Request is configured to not cache in the Front Door profile. |
### Optional debug response headers
You need to send "X-Azure-DebugInfo: 1" request header to enable the following o
## Next steps -- [Create a Front Door](quickstart-create-front-door.md)-- [How Front Door works](front-door-routing-architecture.md)
+* Learn how to [create an Azure Front Door profile](quickstart-create-front-door.md).
+* Learn about [how Azure Front Door works](front-door-routing-architecture.md).
frontdoor How To Create Origin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-create-origin.md
- Title: Set up an Azure Front Door Standard/Premium Origin
-description: This article shows how to configure an origin with Endpoint Manager.
---- Previously updated : 02/18/2021---
-# Set up an Azure Front Door Standard/Premium Origin
-
-> [!Note]
-> This documentation is for Azure Front Door Standard/Premium. Looking for information on Azure Front Door? View [here](../front-door-overview.md).
-
-This article will show you how to create an Azure Front Door Standard/Premium origin in an existing origin group.
-
-## Prerequisites
-
-Before you can create an Azure Front Door Standard/Premium origin, you must have created at least one origin group.
-
-## Create a new Azure Front Door Standard/Premium Origin
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Front Door Standard/Premium profile.
-
-1. Select **Origin Group**. Then select **+ Add** to create a new origin group.
-
- :::image type="content" source="../media/how-to-create-origin/select-add-origin.png" alt-text="Screenshot of origin group landing page.":::
-
-1. On the **Add an origin group** page, enter a unique **Name** for the new origin group.
-
-1. Then select **+ Add an Origin** to add a new origin to this origin group.
-
- :::image type="content" source="../media/how-to-create-origin/add-origin-view.png" alt-text="Screenshot of add an origin page.":::
-
- | Setting | Value |
- | | |
- | Name | Enter a unique name for the new Azure Front Door origin. |
- | Origin Type | The type of resource you want to add. Azure Front Door Standard/Premium supports autodiscovery of your app origin from app service, cloud service, or storage. If you want a different resource in Azure or a non-Azure backend, select **Custom host**. |
- | Host Name | If you didn't select **Custom host** for origin host type, select your backend by choosing the origin host name in the dropdown. |
- | Origin Host Header | Enter the host header value being sent to the backend for each request. For more information, see [Origin host header](concept-origin.md#hostheader). |
- | HTTP Port | Enter the value for the port that the origin supports for HTTP protocol. |
- | HTTPS Port | Enter the value for the port that the origin supports for HTTPS protocol. |
- | Priority | Assign priorities to your different origin when you want to use a primary service origin for all traffic. Also, provide backups if the primary or the backup origin is unavailable. For more information, see [Priority](concept-origin.md#priority). |
- | Weight | Assign weights to your different origins to distribute traffic across a set of origins, either evenly or according to weight coefficients. For more information, see [Weights](concept-origin.md#weighted). |
- | Status | Select this option to enable origin. |
- | Rule | Select Rule Sets that will be applied to this Route. For more information about how to configure Rules, see [Configure a Rule Set for Azure Front Door](how-to-configure-rule-set.md) |
-
- > [!IMPORTANT]
- > During configuration, APIs don't validate if the origin is inaccessible from Front Door environments. Make sure that Front Door can reach your origin.
-
-1. Select **Add** to create the new origin. The created origin should appear in the origin list with the group. The health probe path is case sensitive.
-
- :::image type="content" source="../media/how-to-create-origin/origin-list-view.png" alt-text="Screenshot of origin in list view.":::
-
-1. Select **Add** to add the origin group to current endpoint. The origin group should appear within the Origin group panel.
-
-## Clean up resources
-To delete an Origin group when you no longer needed it, click the **...** and then select **Delete** from the drop-down.
--
-To delete an origin when you no longer need it, click the **...** and then select **Delete** from the drop-down.
--
-## Next steps
-
-To learn about custom domains, see [adding a custom domain](how-to-add-custom-domain.md) to your Azure Front Door Standard/Premium endpoint.
healthcare-apis Deploy 02 New Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-02-new-button.md
Title: Deploy the MedTech service with a QuickStart template - Azure Health Data Services
-description: In this article, you'll learn how to deploy the MedTech service in the Azure portal using a QuickStart template.
+description: In this article, you'll learn how to deploy the MedTech service in the Azure portal using a Quickstart template.
Previously updated : 10/28/2022 Last updated : 11/01/2022
iot-dps Quick Setup Auto Provision Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision-terraform.md
+
+ Title: Quickstart - Use Terraform to create a DPS instance
+description: Learn how to deploy an Azure IoT Device Provisioning Service (DPS) resource with Terraform in this quickstart.
+keywords: azure, devops, terraform, device provisioning service, DPS, IoT, IoT Hub DPS
+ Last updated : 10/27/2022+++++++
+# Quickstart: Use Terraform to create an Azure IoT Device Provisioning Service
+
+In this quickstart, you will learn how to deploy an Azure IoT Hub Device Provisioning Service (DPS) resource with a hashed allocation policy using Terraform.
+
+This quickstart was tested with the following Terraform and Terraform provider versions:
+
+- [Terraform v1.2.8](https://releases.hashicorp.com/terraform/)
+- [AzureRM Provider v.3.20.0](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs)
+
+[Terraform](https://www.terraform.io/) enables the definition, preview, and deployment of cloud infrastructure. Using Terraform, you create configuration files using HCL syntax. The [HCL syntax](https://www.terraform.io/language/syntax/configuration) allows you to specify the cloud provider - such as Azure - and the elements that make up your cloud infrastructure. After you create your configuration files, you create an execution plan that allows you to preview your infrastructure changes before they're deployed. Once you verify the changes, you apply the execution plan to deploy the infrastructure.
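As a quick reference, that workflow maps to the standard Terraform CLI commands shown below; the plan file name is only an example:

```bash
terraform init -upgrade         # initialize the working directory and download the AzureRM provider
terraform plan -out main.tfplan # preview the changes and save an execution plan
terraform apply main.tfplan     # apply the saved plan to deploy the infrastructure
```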
+
+In this article, you learn how to:
+
+- Create a Storage Account & Storage Container
+- Create an Event Hubs, Namespace, & Authorization Rule
+- Create an IoT Hub
+- Link IoT Hub to Storage Account endpoint & Event Hubs endpoint
+- Create an IoT Hub Shared Access Policy
+- Create a DPS Resource
+- Link DPS & IoT Hub
+
+## Prerequisites
++
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Implement the Terraform code
+
+> [!NOTE]
+> The example code in this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/). See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/developer/terraform/)
+
+1. Create a directory in which to test and run the sample Terraform code and make it the current directory.
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/201-iot-hub-with-device-provisioning-service/providers.tf)]
+
+1. Create a file named `main.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/201-iot-hub-with-device-provisioning-service/main.tf)]
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/201-iot-hub-with-device-provisioning-service/variables.tf)]
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/201-iot-hub-with-device-provisioning-service/outputs.tf)]
+
+## Initialize Terraform
++
+## Create a Terraform execution plan
++
+## Apply a Terraform execution plan
++
+## Verify the results
+
+**Azure CLI**
+Run [az iot dps show](/cli/azure/iot/dps#az-iot-dps-show) to display the Azure DPS resource.
+
+ ```azurecli
+ az iot dps show \
+ --name my_terraform_dps \
+ --resource-group rg
+ ```
+
+**Azure PowerShell**
+Run [Get-AzIoTDeviceProvisioningService](/powershell/module/az.deviceprovisioningservices/get-aziotdeviceprovisioningservice) to display the Azure DPS resource.
+
+ ```powershell
+ Get-AzIoTDeviceProvisioningService `
+ -ResourceGroupName "rg" `
+ -Name "my_terraform_dps"
+ ```
+
+The names of the resource group and the DPS instance are displayed in the terraform apply output. You can also run the [terraform output](https://www.terraform.io/cli/commands/output) command to view these output values.
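For example, to print the values exported by `outputs.tf`:

```bash
terraform output
```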
+
+## Clean up resources
++
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+Now that you have an instance of the Device Provisioning Service, continue to the next quickstart to provision a simulated device to IoT hub:
+
+> [!div class="nextstepaction"]
+> [Quickstart: Provision a simulated symmetric key device](./quick-create-simulated-device-symm-key.md)
iot-edge How To Share Windows Folder To Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-share-windows-folder-to-vm.md
The Azure IoT Edge for Linux on Windows (EFLOW) virtual machine is isolated from
This article shows you how to enable the folder sharing between the Windows host OS and the EFLOW virtual machine. ## Prerequisites-- Azure IoT Edge for Linux on Windows 1.3.1.30082 update or higher. For more information about EFLOW release notes, see [EFLOW Releases](https://aka.ms/AzEFLOW-Releases).
+- Azure IoT Edge for Linux on Windows 1.3.1.02092 update or higher. For more information about EFLOW release notes, see [EFLOW Releases](https://aka.ms/AzEFLOW-Releases).
- A machine with an x64/x86 processor. - Windows 11 Sun Valley 2 (build 22621) or higher. To get Windows SV2 update, you must be part of Windows Insider Program. For more information, see [Getting started with the Windows Insider Program](https://insider.windows.com/en-us/getting-started). After installation, you can verify your build version by running `winver` at the command prompt.
iot-hub-device-update Configure Access Control Device Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/configure-access-control-device-update.md
+
+ Title: Configure Access Control in Device Update for IoT Hub | Microsoft Docs
+description: Configure Access Control in Device Update for IoT Hub.
++ Last updated : 10/31/2022++++
+# Configure access control roles for Device Update resources
+
+For users to have access to Device Update, they must be granted access to the Device Update account and instance, and the required access must be set on the linked IoT hub.
+
+## Configure access control for Device Update account
+
+# [Azure portal](#tab/portal)
+
+1. In your Device Update account, select **Access control (IAM)** from the navigation menu.
+
+ :::image type="content" source="media/create-device-update-account/account-access-control.png" alt-text="Screenshot of access Control within Device Update account." lightbox="media/create-device-update-account/account-access-control.png":::
+
+2. Select **Add role assignments**.
+
+3. On the **Role** tab, select a Device Update role from the available options:
+
+ * Device Update Administrator
+ * Device Update Reader
+ * Device Update Content Administrator
+ * Device Update Content Reader
+ * Device Update Deployments Administrator
+ * Device Update Deployments Reader
+
+ For more information, [Learn about Role-based access control in Device Update for IoT Hub](device-update-control-access.md).
+
+ :::image type="content" source="media/create-device-update-account/role-assignment.png" alt-text="Screenshot of access Control role assignments within Device Update account." lightbox="media/create-device-update-account/role-assignment.png":::
+
+4. Select **Next**
+5. On the **Members** tab, select the users or groups that you want to assign the role to.
+
+ :::image type="content" source="media/create-device-update-account/role-assignment-2.png" alt-text="Screenshot of access Control member selection within Device Update account." lightbox="media/create-device-update-account/role-assignment-2.png":::
+
+6. Select **Review + assign**
+7. Review the new role assignments and select **Review + assign** again
+8. You're now ready to use Device Update from within your IoT Hub
+
+# [Azure CLI](#tab/cli)
+
+The following roles are available for assigning access to Device Update:
+
+* Device Update Administrator
+* Device Update Reader
+* Device Update Content Administrator
+* Device Update Content Reader
+* Device Update Deployments Administrator
+* Device Update Deployments Reader
+
+For more information, [Learn about Role-based access control in Device Update for IoT Hub](device-update-control-access.md).
+
+Use the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command to configure access control for your Device Update account.
+
+Replace the following placeholders with your own information:
+
+* *\<role>*: The Device Update role that you're assigning.
+* *\<user_group>*: The user or group that you want to assign the role to.
+* *\<account_id>*: The resource ID for the Device Update account that the user or group will get access to. You can retrieve the resource ID by using the [az iot du account show](/cli/azure/iot/du/account#az-iot-du-account-show) command and querying for the ID value: `az iot du account show -n <account_name> --query id`.
+
+```azurecli-interactive
+az role assignment create --role '<role>' --assignee <user_group> --scope <account_id>
+```
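For example, a minimal sketch that grants a user the Device Update Administrator role on an account; the account name and user here are hypothetical:

```azurecli-interactive
# Hypothetical account name and user
accountId=$(az iot du account show -n contosoDuAccount --query id -o tsv)
az role assignment create --role 'Device Update Administrator' --assignee jane@contoso.com --scope $accountId
```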
++
+## Configure access for Azure Device Update service principal in linked IoT hub
+
+Device Update for IoT Hub communicates with IoT Hub to manage deployments and updates and to get information about devices. To enable this access, you need to grant the **Azure Device Update** service principal the **IoT Hub Data Contributor** role.
+
+# [Azure portal](#tab/portal)
+
+1. In the Azure portal, navigate to the IoT hub connected to your Device Update instance.
+
+ :::image type="content" source="media/create-device-update-account/navigate-to-iot-hub.png" alt-text="Screenshot of instance and linked IoT hub." lightbox="media/create-device-update-account/navigate-to-iot-hub.png":::
+
+1. Select **Access Control(IAM)** from the navigation menu. Select **Add** > **Add role assignment**.
+
+ :::image type="content" source="media/create-device-update-account/iot-hub-access-control.png" alt-text="Screenshot of access Control within IoT Hub." lightbox="media/create-device-update-account/iot-hub-access-control.png":::
+
+3. In the **Role** tab, select **IoT Hub Data Contributor**. Select **Next**.
+
+ :::image type="content" source="media/create-device-update-account/role-assignment-iot-hub.png" alt-text="Screenshot of access Control role assignment within IoT Hub." lightbox="media/create-device-update-account/role-assignment-iot-hub.png":::
+
+4. For **Assign access to**, select **User, group, or service principal**. Select **Select Members** and search for '**Azure Device Update**'
+
+ :::image type="content" source="media/create-device-update-account/assign-role-to-du-service-principal.png" alt-text="Screenshot of access Control member selection for IoT Hub." lightbox="media/create-device-update-account/assign-role-to-du-service-principal.png":::
+
+6. Select **Next** > **Review + Assign**
+
+To validate that you've set permissions correctly:
+
+1. In the Azure portal, navigate to the IoT hub connected to your Device Update instance.
+1. Select **Access Control(IAM)** from the navigation menu.
+1. Select **Check access**.
+1. Select **User, group, or service principal** and search for '**Azure Device Update**'
+1. After clicking on **Azure Device Update**, verify that the **IoT Hub Data Contributor** role is listed under **Role assignments**
+
+# [Azure CLI](#tab/cli)
+
+Use the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command to create a role assignment for the Azure Device Update service principal.
+
+Replace *\<resource_id>* with the resource ID of your IoT hub. You can retrieve the resource ID by using the [az iot hub show](/cli/azure/iot/hub#az-iot-hub-show) command and querying for the ID value: `az iot hub show -n <hub_name> --query id`.
+
+```azurecli
+az role assignment create --role "IoT Hub Data Contributor" --assignee https://api.adu.microsoft.com/ --scope <resource_id>
+```
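For example, a minimal sketch that retrieves the hub's resource ID, creates the assignment, and then lists it to confirm; the hub name is hypothetical:

```azurecli
# Hypothetical hub name: fetch the resource ID, assign the role, then verify the assignment
hubId=$(az iot hub show -n contoso-hub --query id -o tsv)
az role assignment create --role "IoT Hub Data Contributor" --assignee https://api.adu.microsoft.com/ --scope $hubId
az role assignment list --scope $hubId --role "IoT Hub Data Contributor" -o table
```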
++
+## Next steps
+
+Try updating a device using one of the following quick tutorials:
+
+* [Update a simulated IoT Edge device](device-update-simulator.md)
+* [Update a Raspberry Pi](device-update-raspberry-pi.md)
+* [Update an Ubuntu Server 18.04 x64 Package agent](device-update-ubuntu-agent.md)
iot-hub-device-update Configure Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/configure-private-endpoints.md
You can use either the Azure portal or the Azure CLI to create private endpoints
1. Fill all the required fields on the **Resource** tab * **Connection method**: Select **Connect to an Azure resource by resource ID or alias**.
- * **Resource ID or alias**: Enter the Resource ID of the Device Update account. You can retrieve the resource ID of a Device Update account from the Azure portal by selecting **JSON View** on the **Overview** page. Or, you can retrieve it by using the [az iot device-update account show](/cli/azure/iot/device-update/account#az-iot-device-update-account-show) command and querying for the ID value: `az iot device-update account show -n <account_name> --query id`.
+ * **Resource ID or alias**: Enter the Resource ID of the Device Update account. You can retrieve the resource ID of a Device Update account from the Azure portal by selecting **JSON View** on the **Overview** page. Or, you can retrieve it by using the [az iot du account show](/cli/azure/iot/du/account#az-iot-du-account-show) command and querying for the ID value: `az iot du account show -n <account_name> --query id`.
* **Target sub-resource**: Value must be **DeviceUpdate** :::image type="content" source="./media/configure-private-endpoints/private-endpoint-manual-create.png" alt-text="Screenshot showing the Resource page of the Create a private endpoint tab in Private Link Center.":::
Replace the following placeholders with your own information:
* **PRIVATE_LINK_CONNECTION_NAME**: Name of the private link service connection. * **VIRTUAL_NETWORK_NAME**: Name of an existing virtual network associated with the subnet. * **SUBNET_NAME**: Name or ID of an existing subnet. If you use a subnet name, then you also need to include the virtual network name. If you use a subnet ID, you can omit the `--vnet-name` parameter.
-* **DEVICE_UPDATE_RESOURCE_ID**: You can retrieve the resource ID of a Device Update account from the Azure portal by selecting **JSON View** on the **Overview** page. Or, you can retrieve it by using the [az iot device-update account show](/cli/azure/iot/device-update/account#az-iot-device-update-account-show) command and querying for the ID value: `az iot device-update account show -n <account_name> --query id`.
+* **DEVICE_UPDATE_RESOURCE_ID**: You can retrieve the resource ID of a Device Update account from the Azure portal by selecting **JSON View** on the **Overview** page. Or, you can retrieve it by using the [az iot du account show](/cli/azure/iot/du/account#az-iot-du-account-show) command and querying for the ID value: `az iot du account show -n <account_name> --query id`.
* **LOCATION**: Name of the Azure region. Your private endpoint must be in the same region as your virtual network, but can be in a different region from the Device Update account. ```azurecli-interactive
There are four provisioning states:
# [Azure CLI](#tab/cli)
-Use the [az iot device-update account private-endpoint-connection set](/cli/azure/iot/device-update/account/private-endpoint-connection#az-iot-device-update-account-private-endpoint-connection-set) command to manage private endpoint connection.
+Use the [az iot du account private-endpoint-connection set](/cli/azure/iot/du/account/private-endpoint-connection#az-iot-du-account-private-endpoint-connection-set) command to manage private endpoint connection.
Replace the following placeholders with your own information:
Replace the following placeholders with your own information:
* **STATUS**: Either `Approved` or `Rejected`. ```azurecli-interactive
-az iot device-update account private-endpoint-connection set \
+az iot du account private-endpoint-connection set \
--name <ACCOUNT_NAME> \ --connection-name <PRIVATE_LINK_CONNECTION_NAME> \ --status <STATUS> \
iot-hub-device-update Create Device Update Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/create-device-update-account.md
Title: Create a device update account in Device Update for Azure IoT Hub | Microsoft Docs description: Create a device update account in Device Update for Azure IoT Hub.-- Previously updated : 06/23/2022++ Last updated : 10/30/2022 - # Device Update for IoT Hub resource management
To get started with Device Update you'll need to create a Device Update account
# [Azure portal](#tab/portal)
-An IoT hub. It's recommended that you use an S1 (Standard) tier or above.
+An IoT hub. It's required that you use an S1 (Standard) tier or above.
# [Azure CLI](#tab/cli)
-* An IoT hub. It's recommended that you use an S1 (Standard) tier or above.
+* An IoT hub. It's required that you use an S1 (Standard) tier or above.
* An Azure CLI environment:
An IoT hub. It's recommended that you use an S1 (Standard) tier or above.
2. Select **Create** > **Device Update for IoT Hub**
-3. On the **Basics** tab, provide the following information for your Device Update account:
+3. On the **Basics** tab, provide the following information for your Device Update account and instance:
* **Subscription**: The Azure subscription to be associated with your Device Update account. * **Resource group**: An existing or new resource group. * **Name**: A name for your account. * **Location**: The Azure region where your account will be located. For information about which regions support Device Update for IoT Hub, see [Azure Products-by-region page](https://azure.microsoft.com/global-infrastructure/services/?products=iot-hub).-
+ * Check the box to assign the Device Update administrator role to yourself. You can also use the steps listed in the [Configure access control roles](configure-access-control-device-update.md) section to provide a combination of roles to users and applications for the right level of access. You need to have Owner or User Access Administrator permissions in your subscription to manage roles.
+ * **Instance Name**: A name for your instance.
+ * **IoT Hub Name**: Select the IoT hub you want to link to your Device Update instance.
+ * Check the box to grant the Azure Device Update service principal the access it needs in the IoT hub to set up and operate the Device Update service. You need to have the right permissions to add this access.
> [!NOTE]
- > Your Device Update account doesn't need to be in the same region as your IoT hubs, but for better performance it is recommended that you keep them geographically close.
+ > If you are unable to grant access to the Azure Device Update service principal during resource creation, refer to [configure the access control for users and Azure Device Update service principal](configure-access-control-device-update.md). If this access is not set, you will not be able to run deployment, device management, and diagnostic operations. Learn more about the [Azure Device Update service principal access](device-update-control-access.md#configuring-access-for-azure-device-update-service-principal-in-the-iot-hub).
:::image type="content" source="media/create-device-update-account/account-details.png" alt-text="Screenshot of account details." lightbox="media/create-device-update-account/account-details.png":::
-4. Optionally, you can check the box to assign the Device Update administrator role to yourself. You can also use the steps listed in the [Configure access control roles](#configure-access-control-roles-for-device-update) section to provide a combination of roles to users and applications for the right level of access.
-
- You need to have Owner or User Access Administrator permissions in your subscription to manage roles.
-
-5. Select **Next: Instance**
-
- An *instance* of Device Update is associated with a single IoT hub. Select the IoT hub that will be used with Device Update. When you link an IoT hub to a Device Update instance, a new shared access policy is automatically created give Device Update permissions to work with IoT Hub (registry write and service connect). This policy ensures that access is only limited to Device Update.
+5. Select **Next: Diagnostics**. Enabling Microsoft diagnostics gives Microsoft permission to collect, store, and analyze diagnostic log files from your devices when they encounter an update failure. To enable remote log collection for diagnostics, you need to link your Device Update instance to your Azure Blob storage account. Selecting the Azure Storage account will automatically update the storage details.
-6. On the **Instance** tab, provide the following information for your Device Update instance:
+ :::image type="content" source="media/create-device-update-account/account-diagnostics.png" alt-text="Screenshot of diagnostic details." lightbox="media/create-device-update-account/account-diagnostics.png":::
- * **Name**: A name for your instance.
- * **IoT Hub details**: Select an IoT hub to link to this instance.
+6. On the **Networking** tab, choose the endpoints that devices can use to connect to your Device Update instance. Accept the default setting, **Public access**, for this example.
- :::image type="content" source="media/create-device-update-account/instance-details.png" alt-text="Screenshot of instance details." lightbox="media/create-device-update-account/instance-details.png":::
+ :::image type="content" source="media/create-device-update-account/account-networking.png" alt-text="Screenshot of networking details." lightbox="media/create-device-update-account/account-networking.png":::
7. Select **Next: Review + Create**. After validation, select **Create**.
An IoT hub. It's recommended that you use an S1 (Standard) tier or above.
8. You'll see that your deployment is in progress. The deployment status will change to "complete" in a few minutes. When it does, select **Go to resource**
- :::image type="content" source="media/create-device-update-account/account-complete.png" alt-text="Screenshot of account deployment complete." lightbox="media/create-device-update-account/account-complete.png":::
- # [Azure CLI](#tab/cli)
-Use the [az iot device-update account create](/cli/azure/iot/device-update/account#az-iot-device-update-account-create) command to create a new Device Update account.
+Use the [az iot du account create](/cli/azure/iot/du/account#az-iot-du-account-create) command to create a new Device Update account.
Replace the following placeholders with your own information:
Replace the following placeholders with your own information:
> Your Device Update account doesn't need to be in the same region as your IoT hubs, but for better performance it is recommended that you keep them geographically close. ```azurecli-interactive
-az iot device-update account create --resource-group <resource_group> --account <account_name> --location <region>
+az iot du account create --resource-group <resource_group> --account <account_name> --location <region>
```
-Use the [az iot device-update instance create](/cli/azure/iot/device-update/instance#az-iot-device-update-instance-create) command to create a Device Update instance.
+Use the [az iot du instance create](/cli/azure/iot/du/instance#az-iot-du-instance-create) command to create a Device Update instance.
An *instance* of Device Update is associated with a single IoT hub. Select the IoT hub that will be used with Device Update. When you link an IoT hub to a Device Update instance, a new shared access policy is automatically created to give Device Update permissions to work with IoT Hub (registry write and service connect). This policy ensures that access is limited only to Device Update.
Replace the following placeholders with your own information:
* *\<iothub_id>*: The resource ID for the IoT hub that will be linked to this instance. You can retrieve your IoT hub resource ID by using the [az iot hub show](/cli/azure/iot/hub#az-iot-hub-show) command and querying for the ID value: `az iot hub show -n <iothub_name> --query id`. ```azurecli-interactive
-az iot device-update instance create --account <account_name> --instance <instance_name> --iothub-ids <iothub_id>
+az iot du instance create --account <account_name> --instance <instance_name> --iothub-ids <iothub_id>
``` >[!TIP]
az iot device-update instance create --account <account_name> --instance <instan
-## Configure access control roles for Device Update
-
-In order for other users to have access to Device Update, they must be granted access to this resource. You can skip this step if you assigned the Device Update administrator role to yourself during account creation and don't need to provide access to other users or applications.
-
-# [Azure portal](#tab/portal)
-
-1. In your Device Update account, select **Access control (IAM)** from the navigation menu.
-
- :::image type="content" source="media/create-device-update-account/account-access-control.png" alt-text="Screenshot of access Control within Device Update account." lightbox="media/create-device-update-account/account-access-control.png":::
-
-2. Select **Add role assignments**.
-
-3. On the **Role** tab, select a Device Update role from the available options:
-
- * Device Update Administrator
- * Device Update Reader
- * Device Update Content Administrator
- * Device Update Content Reader
- * Device Update Deployments Administrator
- * Device Update Deployments Reader
-
- For more information, [Learn about Role-based access control in Device Update for IoT Hub](device-update-control-access.md).
-
- :::image type="content" source="media/create-device-update-account/role-assignment.png" alt-text="Screenshot of access Control role assignments within Device Update account." lightbox="media/create-device-update-account/role-assignment.png":::
-
-4. Select **Next**
-5. On the **Members** tab, select the users or groups that you want to assign the role to.
-
- :::image type="content" source="media/create-device-update-account/role-assignment-2.png" alt-text="Screenshot of access Control member selection within Device Update account." lightbox="media/create-device-update-account/role-assignment-2.png":::
-
-6. Select **Review + assign**
-7. Review the new role assignments and select **Review + assign** again
-8. You're now ready to use Device Update from within your IoT Hub
-
-# [Azure CLI](#tab/cli)
-
-The following roles are available for assigning access to Device Update:
-
-* Device Update Administrator
-* Device Update Reader
-* Device Update Content Administrator
-* Device Update Content Reader
-* Device Update Deployments Administrator
-* Device Update Deployments Reader
-
-For more information, [Learn about Role-based access control in Device Update for IoT Hub](device-update-control-access.md).
-
-Use the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command to configure access control for your Device Update account.
-
-Replace the following placeholders with your own information:
-
-* *\<role>*: The Device Update role that you're assigning.
-* *\<user_group>*: The user or group that you want to assign the role to.
-* *\<account_id>*: The resource ID for the Device Update account that the user or group will get access to. You can retrieve the resource ID by using the [az iot device-update account show](/cli/azure/iot/device-update/account#az-iot-device-update-account-show) command and querying for the ID value: `az iot device-update account show -n <account_name> --query id`.
-
-```azurecli-interactive
-az role assignment create --role '<role>' --assignee <user_group> --scope <account_id>
-```
---
-## Configure access control roles for IoT Hub
-
-Device Update for IoT Hub communicates with IoT Hub to manage deployments and updates and to get information about devices. To enable the access, you need to give the **Azure Device Update** service principal access with the **IoT Hub Data Contributor** role.
-
-# [Azure portal](#tab/portal)
-
-1. In the Azure portal, navigate to the IoT hub connected to your Device Update instance.
-1. Select **Access Control(IAM)** from the navigation menu.
-1. Select **Add** > **Add role assignment**.
-1. In the **Role** tab, select **IoT Hub Data Contributor**. Select **Next**.
-1. For **Assign access to**, select **User, group, or service principal**.
-1. Select **Select Members** and search for '**Azure Device Update**'
-1. Select **Next** > **Review + Assign**
-
-To validate that you've set permissions correctly:
-
-1. In the Azure portal, navigate to the IoT hub connected to your Device Update instance.
-1. Select **Access Control(IAM)** from the navigation menu.
-1. Select **Check access**.
-1. Select **User, group, or service principal** and search for '**Azure Device Update**'
-1. After clicking on **Azure Device Update**, verify that the **IoT Hub Data Contributor** role is listed under **Role assignments**
-
-# [Azure CLI](#tab/cli)
-
-Use the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command to create a role assignment for the Azure Device Update service principal.
-
-Replace *\<resource_id>* with the resource ID of your IoT hub. You can retrieve the resource ID by using the [az iot hub show](/cli/azure/iot/hub#az-iot-hub-show) command and querying for the ID value: `az iot hub show -n <hub_name> --query id`.
-
-```azurecli
-az role assignment create --role "IoT Hub Data Contributor" --assignee https://api.adu.microsoft.com/ --scope <resource_id>
-```
---
-## View and query accounts or instances
-
-You can view, sort, and query all of your Device Update accounts and instances.
-
-# [Azure portal](#tab/portal)
-
-1. To view all Device Update accounts, use the Azure portal to search for the **Device Update for IoT Hubs** service.
-
- * Use the **Grouping** dropdown menu to group account by subscription, resource group, location, and other conditions.
- * Select **Add filter** to filter the list of accounts by resource group, location, tags, and other conditions.
-
-1. To view all instances in an account, navigate to that account in the Azure portal. Select **Instances** from the **Instance management** section of the menu
-
- * Use the search box to filter instances.
-
-# [Azure CLI](#tab/cli)
-
-To view all Device Update accounts, use the [az iot device-update account list](/cli/azure/iot/device-update/account#az-iot-device-update-account-list) command.
-
-```azurecli-interactive
-az iot device-update account list
-```
-
-To view all instances in an account, use the [az iot device-update instance list](/cli/azure/iot/device-update/instance#az-iot-device-update-instance-list) command.
-
-```azurecli-interactive
-az iot device-update instance list --account <account_name>
-```
-
-Both `list` commands support additional grouping and filter operations. Use the `--query` argument to find accounts or instances based on conditions like tags. For example, `--query "[?tags.env == 'test']"`. Use the `--output` argument to format the results. For example, `--output table`.
--
+Once you have created the resource, [configure the access control for users and Azure Device Update service principal](configure-access-control-device-update.md).
## Next steps
-Try updating a device using one of the following quick tutorials:
-
-* [Update a simulated IoT Edge device](device-update-simulator.md)
-* [Update a Raspberry Pi](device-update-raspberry-pi.md)
-* [Update an Ubuntu Server 18.04 x64 Package agent](device-update-ubuntu-agent.md)
-
-[Learn about Device update account and instance.](device-update-resources.md)
-
-[Learn about Device update access control roles](device-update-control-access.md)
+* [Configure Access Control in Device Update for IoT Hub ](configure-access-control-device-update.md)
+* [Learn about Device update account and instance.](device-update-resources.md)
+* [Learn about Device update access control roles](device-update-control-access.md)
iot-hub-device-update Create Update Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/create-update-group.md
Title: Create device group in Device Update for Azure IoT Hub | Microsoft Docs
-description: Create a device group in Device Update for Azure IoT Hub
+ Title: Manage device groups in Device Update for Azure IoT Hub | Microsoft Docs
+description: Configure device groups in Device Update for Azure IoT Hub by using twin tags
Previously updated : 2/17/2021 Last updated : 10/31/2022
-# Create device groups in Device Update for IoT Hub
+# Manage device groups in Device Update for IoT Hub
-Device Update for IoT Hub allows deploying an update to a group of IoT devices.
+Device Update for IoT Hub allows deploying an update to a group of IoT devices. This step is optional when deploying updates to your managed devices. You can deploy updates to your devices using the default group that is created for you. Alternatively, you can assign a user-defined tag to your devices, and they'll be automatically grouped based on the tag and the device compatibility properties.
-> [!NOTE]
-> If you would like to deploy to a default group instead of a user-created group, you can directly move to [How to deploy an update](deploy-update.md)
+> [!NOTE]
+> If you would like to deploy to a default group instead of a user-created group, continue to [How to deploy an update](deploy-update.md).
## Prerequisites
-* [Access to an IoT Hub with Device Update for IoT Hub enabled](create-device-update-account.md). It is recommended that you use a S1 (Standard) tier or above for your IoT Hub.
+* Access to [an IoT Hub with Device Update for IoT Hub enabled](create-device-update-account.md). We recommend that you use an S1 (Standard) tier or above for your IoT Hub.
* An IoT device (or simulator) provisioned for Device Update within IoT Hub.
-* [At least one update has been successfully imported for the provisioned device.](import-update.md)
-* Install and start the Device Update agent on your IoT device either as a [module or device level identity](device-update-agent-provisioning.md)
+ * Install and start the [Device Update agent](device-update-agent-provisioning.md) on your IoT device either as a module- or device-level identity.
+* An [imported update for the provisioned device](import-update.md).
+
+# [Azure portal](#tab/portal)
+
+Supported browsers:
+
+* [Microsoft Edge](https://www.microsoft.com/edge)
+* Google Chrome
+
+# [Azure CLI](#tab/cli)
+
+An Azure CLI environment:
+
+* Use the Bash environment in [Azure Cloud Shell](../cloud-shell/quickstart.md).
+
+ [![Launch Cloud Shell in a new window](../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
+
+* Or, if you prefer to run CLI reference commands locally, [install the Azure CLI](/cli/azure/install-azure-cli)
+
+ 1. Sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command.
+ 2. Run [az version](/cli/azure/reference-index#az-version) to find the version and dependent libraries that are installed. To upgrade to the latest version, run [az upgrade](/cli/azure/reference-index#az-upgrade).
+ 3. When prompted, install Azure CLI extensions on first use. The commands in this article use the **azure-iot** extension. Run `az extension update --name azure-iot` to make sure you're using the latest version of the extension.
++ ## Add a tag to your devices
-Device Update for IoT Hub allows deploying an update to a group of IoT devices. To create a group, the first step is to add a tag to the target set of devices in IoT Hub. Tags can only be successfully added to your device after it has been connected to Device Update.
+To create a device group, the first step is to add a tag to the target set of devices in IoT Hub. Tags can only be successfully added to your device after it has been connected to Device Update.
+
+Device Update tags use the following format:
+
+```json
+"tags": {
+ "ADUGroup": "<CustomTagValue>"
+}
+```
+
+For more information and examples of twin JSON syntax, see [Understand and use device twins](../iot-hub/iot-hub-devguide-device-twins.md) or [Understand and use module twins](../iot-hub/iot-hub-devguide-module-twins.md).
-The below documentation describes how to add and update a tag.
+The following sections describe different ways to add and update tags.
-### Programmatically update device twins
+### Add tags with SDKs
-You can update a device twin with the appropriate tag using RegistryManager after enrolling the device with Device Update.
+You can update the device or module twin with the appropriate tag using RegistryManager after enrolling the device with Device Update. For more information, see the following articles:
* [Learn how to add tags using a sample .NET app.](../iot-hub/iot-hub-csharp-csharp-twin-getstarted.md) * [Learn about tag properties](../iot-hub/iot-hub-devguide-device-twins.md#tags-and-properties-format).
-#### Device Update tag format
+Add tags to the device twin if your Device Update agent is provisioned with device identity, or to the corresponding module twin if the Device Update agent is provisioned with a module identity.
-```json
- "tags": {
- "ADUGroup": "<CustomTagValue>"
- }
-```
+### Add tags using jobs
-### Using jobs
+You can schedule a job on multiple devices to add or update a Device Update tag. For examples of job operations, see [Schedule jobs on multiple devices](../iot-hub/iot-hub-devguide-jobs.md). You can update either device twins or module twins using jobs, depending on whether the Device Update agent is provisioned with a device or module identity.
-It is possible to schedule a job on multiple devices to add or update a Device Update tag. For examples, see [Schedule jobs on multiple devices](../iot-hub/iot-hub-devguide-jobs.md). You can update a device twin or module twin (if the Device Update agent is set up as a module identity) using jobs. For more information, see [Schedule and broadcast jobs](../iot-hub/iot-hub-csharp-csharp-schedule-jobs.md).
+For more information, see [Schedule and broadcast jobs](../iot-hub/iot-hub-csharp-csharp-schedule-jobs.md).
> [!NOTE]
-> This action counts against your IoT Hub messages quota and it is recommended to change only up to 50,000 device or module twin tags at a time otherwise you may need to buy more IoT Hub units if you exceed your daily IoT Hub message quota. Details can be found at [Quotas and throttling](../iot-hub/iot-hub-devguide-quotas-throttling.md#quotas-and-throttling).
+> This action counts against your IoT Hub messages quota. We recommend only changing up to 50,000 device or module twin tags at a time, otherwise you may need to buy more IoT Hub units if you exceed your daily IoT Hub message quota. For more information, see [Quotas and throttling](../iot-hub/iot-hub-devguide-quotas-throttling.md#quotas-and-throttling).
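For illustration, a minimal sketch of such a job using the Azure CLI; the hub name, device query, and tag value are hypothetical:

```azurecli
az iot hub job create \
    --hub-name contoso-hub \
    --job-id tag-adu-group-fleet-eu \
    --job-type scheduleUpdateTwin \
    --query-condition "deviceId IN ['device1', 'device2']" \
    --twin-patch '{"tags": {"ADUGroup": "Fleet-EU"}}'
```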
+
+### Add tags by updating twins
-### Direct twin updates
+Tags can also be added or updated directly in device or module twins.
-Tags can also be added or updated in a device twin or module twin directly.
+# [Azure portal](#tab/portal)
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your IoT Hub.
+1. In the [Azure portal](https://portal.azure.com), navigate to your IoT hub.
-1. Select **Devices** from the navigation menu and select your IoT device to open its device details.
+2. From **Devices** or **IoT Edge** on the left navigation pane, find your IoT device. Either navigate to the device twin or the Device Update module and then its module twin, depending on whether the Device Update agent is provisioned with a device or module identity.
-1. Open the twin details.
+3. In the twin details, delete any existing Device Update tag value by setting them to null.
- * If the Device Update agent is configured as a device identity, select **Device twin**.
- * If the Device Update agent is configured as a module identity, select the Device Update module and then **Module identity twin**.
+4. Add a new Device Update tag value as shown below.
-1. In the device twin or module twin, delete any existing Device Update tag value by setting them to null.
+ ```JSON
+ "tags": {
+ "ADUGroup": "<CustomTagValue>"
+ }
+ ```
-1. Add a new Device Update tag value as shown below. [Example device twin JSON document with tags.](../iot-hub/iot-hub-devguide-device-twins.md#device-twins)
+# [Azure CLI](#tab/cli)
-```JSON
- "tags": {
- "ADUGroup": "<CustomTagValue>"
- }
+Use [az iot hub device-twin update](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-update) or [az iot hub module-twin update](/cli/azure/iot/hub/module-twin#az-iot-hub-module-twin-update) to add a tag to a device or module twin.
+
+For both `update` commands, the `--tags` argument accepts either inline json or a file path to json content.
+
+For example:
+
+```azurecli
+az iot hub module-twin update \
+ --hub-name <IoT Hub name> \
+ --device-id <device name> \
+ --module-id <module name> \
+ --tags '{"ADUGroup": "<custom_tag_value>"}'
```
++

### Limitations
-* You can add any value to your tag except for ΓÇÿUncategorizedΓÇÖ which is a reserved value.
-* Tag value cannot exceed 255 characters.
-* Tag value can contain alphanumeric characters and the following special characters ".","-","_","~".
-* Tag and Group names are case-sensitive.
-* A device can only have one tag with the name ADUGroup, any subsequent additions of a tag with that name will override the existing value for tag name ADUGroup.
-* One device can only belong to one Group.
+* You can add any value to your tag except for `Uncategorized` and `$default`, which are reserved values.
+* Tag value can't exceed 200 characters.
+* Tag value can contain alphanumeric characters and the following special characters: `. - _ ~`.
+* Tag and group names are case-sensitive.
+* A device can only have one tag with the name ADUGroup. Any additions of a tag with that name will override the existing value for tag name ADUGroup.
+* One device can only belong to one group.
-## Create a device group by selecting an existing IoT Hub tag
+## View device groups
-1. Go to the [Azure portal](https://portal.azure.com).
+Groups are automatically created based on the tags assigned as well as the compatibility properties of the devices. One group can have multiple subgroups with different device classes.
-2. Select the IoT Hub you previously connected to your Device Update instance.
+# [Azure portal](#tab/portal)
-3. Select the **Updates** option under **Device Management** from the left-hand navigation bar.
+1. In the [Azure portal](https://portal.azure.com), navigate to the IoT hub that you previously connected to your Device Update instance.
-4. Select the **Groups and Deployments** tab at the top of the page.
+2. Select the **Updates** option under **Device Management** from the left-hand navigation bar.
+
+3. Select the **Groups and Deployments** tab.
:::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot of ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
-5. Select **Add group** to create a new group.
+4. Once a group is created, you'll see that the compliance chart and group list are updated. The Device Update compliance chart shows the count of devices in various states of compliance: **On latest update**, **New updates available**, and **Updates in progress**. For more information, see [Device Update compliance.](device-update-compliance.md)
- :::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot of device group addition." lightbox="media/create-update-group/add-group.png":::
+ :::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot of update compliance view." lightbox="media/create-update-group/updated-view.png":::
-6. Select an IoT Hub tag and Device Class from the list and then select **Create group**.
+5. You should see existing groups and any available updates for the devices in those groups in the group list. If there are devices that don't meet the device class requirements of the group, they'll show up in a corresponding invalid group. You can deploy the best available update to a group from this view by selecting the **Deploy** button next to the group.
- :::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot of tag selection." lightbox="media/create-update-group/select-tag.png":::
+# [Azure CLI](#tab/cli)
-7. Once the group is created, you will see that the update compliance chart and groups list are updated. Update compliance chart shows the count of devices in various states of compliance: On latest update, New updates available, and Updates in Progress. [Learn about update compliance.](device-update-compliance.md)
+Use [az iot du device group list](/cli/azure/iot/du/device/group#az-iot-du-device-group-list) to view all device groups in a Device Update instance.
- :::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot of update compliance view." lightbox="media/create-update-group/updated-view.png":::
+```azurecli
+az iot du device group list \
+ --account <Device Update account name> \
+ --instance <Device Update instance name>
+```
+
+You can use the `--order-by` argument to order the groups returned based on aspects like group ID, count of devices, or count of subgroups with new updates available.
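
For instance, a sketch of listing groups ordered by device count might look like the following; the exact `--order-by` property name shown here is an assumption, so check the command reference for the supported values.

```azurecli
# Assumption: "deviceCount desc" is an accepted --order-by expression
az iot du device group list \
    --account <Device Update account name> \
    --instance <Device Update instance name> \
    --order-by "deviceCount desc"
```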
-8. You should see your newly created group and any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they will show up in a corresponding invalid group. You can deploy the best available update to the new user-defined group from this view by clicking on the "Deploy" button next to the group. See Next Step: Deploy Update for more details.
++
+## View device details for a group
-## View Device details for the group you created
+# [Azure portal](#tab/portal)
-1. Navigate to your newly created group and click on the group name.
+1. From the **Groups and Deployments** tab, select the name of the group that you want to view.
-2. A list of devices that are part of the group will be shown along with their device update properties. In this view, you can also see the update compliance information for all devices that are members of the group. Update compliance chart shows the count of devices in various states of compliance: On latest update, New updates available and Updates in Progress.
+2. On the group details page you can see a list of devices that are part of the group along with their device update properties. In this view, you can also see the update compliance information for all devices that are members of the group. The compliance chart shows the count of devices in various states of compliance.
:::image type="content" source="media/create-update-group/group-details.png" alt-text="Screenshot of device group details view." lightbox="media/create-update-group/group-details.png":::
-3. You can also click on each individual device within a group to be redirected to the device details page in IoT Hub.
+3. You can also select an individual device within a group to be redirected to the device details page in IoT Hub.
:::image type="content" source="media/create-update-group/device-details.png" alt-text="Screenshot of device details view." lightbox="media/create-update-group/device-details.png":::
-## Next Steps
+# [Azure CLI](#tab/cli)
+
+Use [az iot du device group show](/cli/azure/iot/du/device/group#az-iot-du-device-group-show) to view details of a specific device group.
+
+The optional `--best-updates` flag returns a list of the best available updates for the device group, including a count of how many devices need each update.
+
+The optional `--update-compliance` flag returns compliance information for the device group, including how many devices are on their latest update, how many need new updates, and how many are in progress for a new update.
+
+```azurecli
+az iot du device group show \
+ --account <Device Update account name> \
+ --instance <Device Update instance name> \
+ --group-id <value of the ADUGroup tag for this group>
+```
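
For example, to also return compliance counts for the group, add the optional flag to the same command (a sketch using the same placeholder values):

```azurecli
az iot du device group show \
    --account <Device Update account name> \
    --instance <Device Update instance name> \
    --group-id <value of the ADUGroup tag for this group> \
    --update-compliance
```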
+++
+## Next steps
-[Deploy Update](deploy-update.md)
+* [Deploy an update](deploy-update.md)
-[Learn more about device groups](device-update-groups.md)
+* Learn more about [device groups](device-update-groups.md)
-[Learn about update compliance.](device-update-compliance.md)
+* Learn more about [Device Update compliance](device-update-compliance.md)
iot-hub-device-update Create Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/create-update.md
Title: How to prepare an update to be imported into Azure Device Update for IoT
description: How-To guide for preparing to import a new update into Azure Device Update for IoT Hub. Previously updated : 1/28/2022 Last updated : 10/31/2022
Learn how to obtain a new update and prepare the update for importing into Devic
## Prerequisites
-* [Access to an IoT Hub with Device Update for IoT Hub enabled](create-device-update-account.md).
-* An IoT device (or simulator) [provisioned for Device Update](device-update-agent-provisioning.md) within IoT Hub.
-* [PowerShell 5](/powershell/scripting/install/installing-powershell) or later (includes Linux, macOS, and Windows installs)
-* Supported browsers:
- * [Microsoft Edge](https://www.microsoft.com/edge)
- * Google Chrome
+* Access to [an IoT Hub with Device Update for IoT Hub enabled](create-device-update-account.md).
+* An Azure CLI environment:
+
+ * Use the Bash environment in [Azure Cloud Shell](../cloud-shell/quickstart.md).
+
+ [![Launch Cloud Shell in a new window](../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
+
+ * Or, if you prefer to run CLI reference commands locally, [install the Azure CLI](/cli/azure/install-azure-cli)
+
+ 1. Sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command.
+ 2. Run [az version](/cli/azure/reference-index#az-version) to find the version and dependent libraries that are installed. To upgrade to the latest version, run [az upgrade](/cli/azure/reference-index#az-upgrade).
+ 3. When prompted, install Azure CLI extensions on first use. The commands in this article use the **azure-iot** extension. Run `az extension update --name azure-iot` to make sure you're using the latest version of the extension.
+
+>[!TIP]
+>The Azure CLI commands in this article use the backslash `\` character for line continuation so that the command arguments are easier to read. This syntax works in Bash environments. If you're running these commands in PowerShell, replace each backslash with a backtick (`` ` ``), or remove them entirely.
## Obtain an update for your devices
-Now that you've set up Device Update and provisioned your devices, you'll need the update file(s) that you'll be deploying to those devices.
+Now that you've set up Device Update and provisioned your devices, you need the update file(s) that you'll deploy to those devices.
* If you've purchased devices from an Original Equipment Manufacturer (OEM) or solution integrator, that organization will most likely provide update files for you, without you needing to create the updates. Contact the OEM or solution integrator to find out how they make updates available.
-* If your organization already creates software for the devices you use, that same group will be the ones to create the updates for that software.
+* If your organization creates software for the devices you use, that same group will create the updates for that software.
When creating an update to be deployed using Device Update for IoT Hub, start with either the [image-based or package-based approach](understand-device-update.md#support-for-a-wide-range-of-update-artifacts) depending on your scenario.

## Create a basic Device Update import manifest
-Once you have your update files, create an import manifest to describe the update. If you haven't already done so, be sure to familiarize yourself with the basic [import concepts](import-concepts.md). While it is possible to author an import manifest JSON manually using a text editor, this guide will use PowerShell as example.
+Once you have your update files, create an import manifest to describe the update. If you haven't already done so, familiarize yourself with the basic [import concepts](import-concepts.md). While it's possible to author an import manifest JSON manually using a text editor, the Azure Command Line Interface (CLI) simplifies the process greatly, and is used in the examples below.
> [!TIP]
> Try the [image-based](device-update-raspberry-pi.md), [package-based](device-update-ubuntu-agent.md), or [proxy update](device-update-howto-proxy-updates.md) tutorials if you haven't already done so. You can also just view sample import manifest files from those tutorials for reference.
-1. [Clone](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) `Azure/iot-hub-device-update` [Git repository](https://github.com/Azure/iot-hub-device-update).
-
-2. Navigate to `Tools/AduCmdlets` in your local clone from PowerShell.
-
-3. Run the following commands after replacing the following sample parameter values with your own: **Provider, Name, Version, Properties, Handler, Installed Criteria, Files**. See [Import schema and API information](import-schema.md) for details on what values you can use. _In particular, be aware that the same exact set of compatibility properties cannot be used with more than one Provider and Name combination._
+The [az iot du update init v5](/cli/azure/iot/du/update/init#az-iot-du-update-init-v5) command takes the following arguments:
- ```powershell
- Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process
+* `--update-provider`, `--update-name`, and `--update-version`: These three parameters define the **updateId** object that is a unique identifier for each update.
+* `--compat`: The **compatibility** object is a set of name-value pairs that describe the properties of a device that this update is compatible with.
+ * The same exact set of compatibility properties can't be used with more than one provider and name combination.
+* `--step`: The update handler on the device (for example, `microsoft/script:1`, `microsoft/swupdate:1`, or `microsoft/apt:1`) and its associated properties for this update.
+* `--file`: The paths to your update file or files.
- Import-Module ./AduUpdate.psm1
+For more information about these parameters, see [Import schema and API information](import-schema.md).
- $updateId = New-AduUpdateId -Provider Contoso -Name Toaster -Version 1.0
+```azurecli
+az iot du update init v5 \
+ --update-provider <replace with your Provider> \
+ --update-name <replace with your update Name> \
+ --update-version <replace with your update Version> \
+ --compat <replace with the property name>=<replace with the value your device will report> <replace with the property name>=<replace with the value your device will report> \
+ --step handler=<replace with your chosen handler> properties=<replace with any handler properties (JSON-formatted)> \
+ --file path=<replace with path(s) to your update file(s), including the full file name>
+```
- $compat = New-AduUpdateCompatibility -Properties @{ deviceManufacturer = 'Contoso'; deviceModel = 'Toaster' }
+For example:
- $installStep = New-AduInstallationStep -Handler 'microsoft/swupdate:1'-HandlerProperties @{ installedCriteria = '1.0' } -Files 'path to your update file'
+```azurecli
+az iot du update init v5 \
+ --update-provider Microsoft \
+ --update-name AptUpdate \
+ --update-version 1.0.0 \
+ --compat manufacturer=Contoso model=Vacuum \
+ --step handler=microsoft/script:1 properties='{"installedCriteria": "1.0"}' \
+ --file path=/my/apt/manifest/file
+```
- $update = New-AduImportManifest -UpdateId $updateId -Compatibility $compat -InstallationSteps $installStep
+For handler properties, you may need to escape certain characters in your JSON. For example, use a backslash (`\`) to escape double quotes if you're running the Azure CLI in PowerShell.
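
As a minimal sketch of that escaping in PowerShell, where the provider, name, handler, and file path values are placeholders:

```azurecli
# PowerShell: backslash-escape the double quotes inside the JSON handler properties
az iot du update init v5 --update-provider Contoso --update-name Toaster --update-version 1.0.0 --compat manufacturer=Contoso model=Toaster --step handler=microsoft/script:1 properties='{\"installedCriteria\": \"1.0\"}' --file path=./my-update-file
```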
- # Write the import manifest to a file, ideally next to the update file(s).
- $update | Out-File "./$($updateId.provider).$($updateId.name).$($updateId.version).importmanifest.json" -Encoding utf8
- ```
+The `init` command supports advanced scenarios, including the [related files feature](related-files.md) that allows you to define the relationship between different update files. For more examples and a complete list of optional parameters, see [az iot du update init v5](/cli/azure/iot/du/update/init#az-iot-du-update-init-v5).
-Once you've created your import manifest, if you're ready to import your update, you can scroll to the Next steps link at the bottom of this page.
+Once you've created your import manifest and saved it as a JSON file, you're ready to [import your update](import-update.md).
## Create an advanced Device Update import manifest for a proxy update
-If your update is more complex, such as a [proxy update](device-update-proxy-updates.md), you may need to create multiple import manifests. You can use the same PowerShell script from the previous section to create parent and child import manifests for complex updates. Run the following commands after replacing the sample parameter values with your own. See [Import schema and API information](import-schema.md) for details on what values you can use.
-
- ```powershell
- Import-Module $PSScriptRoot/AduUpdate.psm1 -ErrorAction Stop
-
- # We will use arbitrary files as update payload files.
- $childFile = "$env:TEMP/childFile.bin.txt"
- $parentFile = "$env:TEMP/parentFile.bin.txt"
- "This is a child update payload file." | Out-File $childFile -Force -Encoding utf8
- "This is a parent update payload file." | Out-File $parentFile -Force -Encoding utf8
-
- #
- # Create a child update
- #
- Write-Host 'Preparing child update ...'
-
- $microphoneUpdateId = New-AduUpdateId -Provider Contoso -Name Microphone -Version $UpdateVersion
- $microphoneCompat = New-AduUpdateCompatibility -DeviceManufacturer Contoso -DeviceModel Microphone
- $microphoneInstallStep = New-AduInstallationStep -Handler 'microsoft/swupdate:1' -Files $childFile
- $microphoneUpdate = New-AduImportManifest -UpdateId $microphoneUpdateId `
- -IsDeployable $false `
- -Compatibility $microphoneCompat `
- -InstallationSteps $microphoneInstallStep `
- -ErrorAction Stop -Verbose:$VerbosePreference
-
- #
- # Create another child update
- #
- Write-Host 'Preparing another child update ...'
-
- $speakerUpdateId = New-AduUpdateId -Provider Contoso -Name Speaker -Version $UpdateVersion
- $speakerCompat = New-AduUpdateCompatibility -DeviceManufacturer Contoso -DeviceModel Speaker
- $speakerInstallStep = New-AduInstallationStep -Handler 'microsoft/swupdate:1' -Files $childFile
- $speakerUpdate = New-AduImportManifest -UpdateId $speakerUpdateId `
- -IsDeployable $false `
- -Compatibility $speakerCompat `
- -InstallationSteps $speakerInstallStep `
- -ErrorAction Stop -Verbose:$VerbosePreference
-
- #
- # Create the parent update which parents the child update above
- #
- Write-Host 'Preparing parent update ...'
-
- $parentUpdateId = New-AduUpdateId -Provider Contoso -Name Toaster -Version $UpdateVersion
- $parentCompat = New-AduUpdateCompatibility -DeviceManufacturer Contoso -DeviceModel Toaster
- $parentSteps = @()
- $parentSteps += New-AduInstallationStep -Handler 'microsoft/script:1' -Files $parentFile -HandlerProperties @{ 'arguments'='--pre'} -Description 'Pre-install script'
- $parentSteps += New-AduInstallationStep -UpdateId $microphoneUpdateId -Description 'Microphone Firmware'
- $parentSteps += New-AduInstallationStep -UpdateId $speakerUpdateId -Description 'Speaker Firmware'
- $parentSteps += New-AduInstallationStep -Handler 'microsoft/script:1' -Files $parentFile -HandlerProperties @{ 'arguments'='--post'} -Description 'Post-install script'
-
- $parentUpdate = New-AduImportManifest -UpdateId $parentUpdateId `
- -Compatibility $parentCompat `
- -InstallationSteps $parentSteps `
- -ErrorAction Stop -Verbose:$VerbosePreference
-
- #
- # Write all to files
- #
- Write-Host 'Saving manifest and update files ...'
-
- New-Item $Path -ItemType Directory -Force | Out-Null
-
- $microphoneUpdate | Out-File "$Path/$($microphoneUpdateId.Provider).$($microphoneUpdateId.Name).$($microphoneUpdateId.Version).importmanifest.json" -Encoding utf8
- $speakerUpdate | Out-File "$Path/$($speakerUpdateId.Provider).$($speakerUpdateId.Name).$($speakerUpdateId.Version).importmanifest.json" -Encoding utf8
- $parentUpdate | Out-File "$Path/$($parentUpdateId.Provider).$($parentUpdateId.Name).$($parentUpdateId.Version).importmanifest.json" -Encoding utf8
-
- Copy-Item $parentFile -Destination $Path -Force
- Copy-Item $childFile -Destination $Path -Force
-
- Write-Host "Import manifest JSON files saved to $Path" -ForegroundColor Green
-
- Remove-Item $childFile -Force -ErrorAction SilentlyContinue | Out-Null
- Remove-Item $parentFile -Force -ErrorAction SilentlyContinue | Out-Null
- ```
+If your update is more complex, such as a [proxy update](device-update-proxy-updates.md), you may need to create multiple import manifests. You can use the same Azure CLI approach from the previous section to create both a _parent_ import manifest and some number of _child_ import manifests for complex updates. Run the following Azure CLI commands after replacing the sample parameter values with your own. See [Import schema and API information](import-schema.md) for details on what values you can use. In the example below, there are three updates to be deployed to the device: one parent update and two child updates:
+
+```azurecli
+az iot du update init v5 \
+ --update-provider <replace with child_1 update Provider> \
+ --update-name <replace with child_1 update Name> \
+ --update-version <replace with child_1 update Version> \
+ --compat manufacturer=<replace with the value your device will report> model=<replace with the value your device will report> \
+ --step handler=<replace with your chosen handler> \
+ --file path=<replace with path(s) to your update file(s), including the full file name>
+az iot du update init v5 \
+ --update-provider <replace with child_2 update Provider> \
+ --update-name <replace with child_2 update Name> \
+ --update-version <replace with child_2 update Version> \
+ --compat manufacturer=<replace with the value your device will report> model=<replace with the value your device will report> \
+ --step handler=<replace with your chosen handler> \
+ --file path=<replace with path(s) to your update file(s), including the full file name>
+az iot du update init v5 \
+ --update-provider <replace with the parent update Provider> \
+ --update-name <replace with the parent update Name> \
+ --update-version <replace with the parent update Version> \
+ --compat manufacturer=<replace with the value your device will report> model=<replace with the value your device will report> \
+ --step handler=<replace with your chosen handler> properties=<replace with any desired handler properties (JSON-formatted)> \
+ --file path=<replace with path(s) to your update file(s), including the full file name> \
+ --step updateId.provider=<replace with child_1 update provider> updateId.name=<replace with child_1 update name> updateId.version=<replace with child_1 update version> \
+ --step updateId.provider=<replace with child_2 update provider> updateId.name=<replace with child_2 update name> updateId.version=<replace with child_2 update version>
+```
## Next steps
iot-hub-device-update Delta Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/delta-updates.md
+
+ Title: Understand Device Update for Azure IoT Hub delta update capabilities | Microsoft Docs
+description: Key concepts for using delta (differential) updates with Device Update for IoT Hub.
++ Last updated : 08/24/2022++++
+# How to understand and use delta updates in Device Update for IoT Hub (Preview)
+
+Delta updates allow you to generate a small update that represents only the changes between two full updates - a source image and a target image. This approach is ideal for reducing the bandwidth used to download an update to a device, particularly if there have been only a few changes between the source and target updates.
+
+>[!NOTE]
+>The delta update feature is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Requirements for using delta updates in Device Update for IoT Hub
+
+- The source and target update files must be SWU (SWUpdate) format.
+- Within each SWUpdate file, there must be a raw image that uses the Ext2, Ext3, or Ext4 filesystem. That image can be compressed with gzip or zstd.
+- The delta generation process recompresses the target SWU update using zstd compression in order to produce an optimal delta. You'll import this recompressed target SWU update to the Device Update service along with the generated delta update file.
+- Within SWUpdate on the device, zstd decompression must also be enabled.
+ - This requires using [SWUpdate 2019.11](https://github.com/sbabic/swupdate/releases/tag/2019.11) or later.
+
+## Configure a device with Device Update agent and delta processor component
+
+In order for your device to download and install delta updates from the Device Update service, you will need several components present and configured.
+
+### Device Update agent
+
+The Device Update agent _orchestrates_ the update process on the device, including download, install, and restart actions. Add the Device Update agent to a device and configure it for use. You'll need the 1.0 or later version of the agent. For instructions, see [Device Update agent provisioning](device-update-agent-provisioning.md).
+
+### Update handler
+
+An update handler integrates with the Device Update agent to perform the actual update install. For delta updates, start with the [`microsoft/swupdate:2` update handler](https://github.com/Azure/iot-hub-device-update/blob/main/src/extensions/step_handlers/swupdate_handler_v2/README.md) if you don't already have your own SWUpdate update handler that you want to modify. **If you use your own update handler, be sure to enable zstd decompression in SWUpdate**.
+
+### Delta processor
+
+The delta processor re-creates the original SWU image file on your device after the delta file has been downloaded, so your update handler can install the SWU file. You'll find all the delta processor code in the [Azure/iot-hub-device-update-delta](https://github.com/Azure/iot-hub-device-update-delta) GitHub repo.
+
+To add the delta processor component to your device image and configure it for use, use apt-get to install the proper Debian package for your platform (it should be named `ms-adu_diffs_x.x.x_amd64.deb` for amd64):
+
+```bash
+sudo apt-get install <path to Debian package>
+```
+
+Alternatively, on a non-Debian Linux device you can install the shared object (libadudiffapi.so) directly by copying it to the `/usr/lib` directory:
+
+```bash
+sudo cp <path to libadudiffapi.so> /usr/lib/libadudiffapi.so
+sudo ldconfig
+```
+
+## Add a source SWU image file to your device
+
+After a delta update has been downloaded to a device, it must be compared against a valid _source SWU file_ that has been previously cached on the device in order to be re-created into a full image. The simplest way to populate this cached image is to deploy a full image update to the device via the Device Update service (using the existing [import](import-update.md) and [deployment](deploy-update.md) processes). As long as the device has been configured with the Device Update agent (version 1.0 or later) and delta processor, the installed SWU file will be cached automatically by the Device Update agent for later delta update use.
+
+If you instead want to directly pre-populate the source image on your device, the path where the image is expected is:
+
+`[BASE_SOURCE_DOWNLOAD_CACHE_PATH]/sha256-[ENCODED HASH]`
+
+By default, `BASE_SOURCE_DOWNLOAD_CACHE_PATH` is the path listed below. The `[provider]` value is the Provider part of the [updateId](import-concepts.md#update-identity) for the source SWU file.
+
+`/var/lib/adu/sdc/[provider]`
+
+`ENCODED_HASH` is the base64 string of the SHA256 hash of the binary, with the following characters replaced after the base64 encoding (see the sketch after this list):
+
+- `+` encoded as `octets _2B`
+- `/` encoded as `octets _2F`
+- `=` encoded as `octets _3D`
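
The following is a minimal sketch of deriving that cache path in Bash. It assumes the encoded hash is the base64 encoding of the raw SHA256 digest with the three substitutions above applied; the file name and provider value are placeholders.

```bash
SOURCE_FILE=source-image.swu   # placeholder: the source SWU file
PROVIDER=Contoso               # placeholder: the Provider part of the source update's updateId

# Base64-encode the raw SHA256 digest of the file
HASH=$(openssl dgst -sha256 -binary "$SOURCE_FILE" | base64 -w0)

# Apply the character substitutions listed above
ENCODED_HASH=$(printf '%s' "$HASH" | sed -e 's/+/_2B/g' -e 's,/,_2F,g' -e 's/=/_3D/g')

echo "/var/lib/adu/sdc/$PROVIDER/sha256-$ENCODED_HASH"
```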
+
+## Generate delta updates using the DiffGen tool
+
+### Environment prerequisites
+
+Before creating deltas with DiffGen, several things need to be downloaded and/or installed on the environment machine. We recommend a Linux environment and specifically Ubuntu 20.04 (or WSL if natively on Windows).
+
+The following table provides a list of the content needed, where to retrieve them, and the recommended installation if necessary:
+
+| Binary Name | Where to acquire | How to install |
+|--|--|--|
+| DiffGen | [Azure/iot-hub-device-update-delta](https://github.com/Azure/iot-hub-device-update-delta) GitHub repo | Select _Microsoft.Azure.DeviceUpdate.Diffs_ under the Packages section on the right side of the page. From there you can install from the command line, or select _package.nupkg_ under the Assets section on the right side of the page to download the package. [Learn more about NuGet packages](https://learn.microsoft.com/nuget/).|
+| .NET (Runtime) | Via Terminal / Package Managers | [Instructions for Linux](/dotnet/core/install/linux). Only the Runtime is required. |
+
+### Dependencies
+
+The zstd_compression_tool is used for decompressing an archive's image files and recompressing them with zstd. This process ensures that all archive files used for diff generation have the same compression algorithm for the images inside the archives.
+
+Commands to install required packages/libraries:
+
+```bash
+sudo apt update
+sudo apt-get install -y python3 python3-pip
+sudo pip3 install libconf zstandard
+```
+
+### Create a delta update using DiffGen
+
+The DiffGen tool is run with several arguments. All arguments are required, and overall syntax is as follows:
+
+`DiffGenTool [source_archive] [target_archive] [output_path] [log_folder] [working_folder] [recompressed_target_archive]`
+
+- The script recompress_tool.py will be run to create the file [recompressed_target_archive], which will then be used instead of [target_archive] as the target file for creating the diff.
+- The image files within [recompressed_target_archive] will be compressed with zstd.
+
+If your SWU files are signed (likely), you'll need another argument as well:
+
+`DiffGenTool [source_archive] [target_archive] [output_path] [log_folder] [working_folder] [recompressed_target_archive] "[signing_command]"`
+
+- In addition to using [recompressed_target_archive] as the target file, providing a signing command string parameter will run recompress_and_sign_tool.py to create the file [recompressed_target_archive] and have the sw-description file within the archive signed (meaning a sw-description.sig file will be present).
+
+The following table describes the arguments in more detail:
+
+| Argument | Description |
+|--|--|
+| [source_archive] | This is the image that the delta will be based against when creating the delta. _Important_: this image must be identical to the image that is already present on the device (for example, cached from a previous update). |
+| [target_archive] | This is the image that the delta will update the device to. |
+| [output_path] | The path (including the desired name of the delta file being generated) on the host machine where the delta file will be placed after creation. If the path doesn't exist, the directory will be created by the tool. |
+| [log_folder] | The path on the host machine where logs will be created. We recommend defining this location as a sub folder of the output path. If the path doesn't exist, it will be created by the tool. |
+| [working_folder] | The path on the machine where collateral and other working files are placed during the delta generation. We recommend defining this location as a subfolder of the output path. If the path doesn't exist, it will be created by the tool. |
+| [recompressed_target_archive] | The path on the host machine where the recompressed target file will be created. This file will be used instead of [target_archive] as the target file for diff generation. If this path exists before calling DiffGenTool, the path will be overwritten. We recommend defining this path as a file in a subfolder of the output path. |
+| "[signing_command]" _(optional)_ | A customizable command used for signing the sw-description file within the recompressed archive file. The sw-description file in the recompressed archive is used as an input parameter for the signing command; DiffGenTool expects the signing command to create a new signature file, using the name of the input with `.sig` appended. Surrounding the parameter in double quotes is needed so that the whole command is passed in as a single parameter. Also, avoid putting the '~' character in a key path used for signing, and use the full home path instead (for example, use /home/USER/keys/priv.pem instead of ~/keys/priv.pem). |
+
+### DiffGen examples
+
+In the examples below, we're operating out of the /mnt/o/temp directory (in WSL):
+
+_Creating diff between input source file and recompressed target file:_
+
+```bash
+sudo ./DiffGenTool
+/mnt/o/temp/[source file.swu]
+/mnt/o/temp/[target file.swu]
+/mnt/o/temp/[delta file to be created]
+/mnt/o/temp/logs
+/mnt/o/temp/working
+/mnt/o/temp/[recompressed file to be created.swu]
+```
+
+If you're also using the signing parameter (needed if your SWU file is signed), you can use the sample `sign_file.sh` script from the [Azure/iot-hub-device-update-delta](https://github.com/Azure/iot-hub-device-update-delta/tree/main/src/scripts/signing_samples/openssl_wrapper) GitHub repo. First, open the script and edit it to add the path to your private key file. Save the script, and then run DiffGen as follows:
+
+_Creating diff between input source file and recompressed/re-signed target file:_
+
+```bash
+sudo ./DiffGenTool
+/mnt/o/temp/[source file.swu]
+/mnt/o/temp/[target file.swu]
+/mnt/o/temp/[delta file to be created]
+/mnt/o/temp/logs
+/mnt/o/temp/working
+/mnt/o/temp/[recompressed file to be created.swu]
+/mnt/o/temp/[path to script]/sign_file.sh
+```
+
+## Import the generated delta update
+
+### Generate import manifest
+
+The basic process of importing an update to the Device Update service is unchanged for delta updates, so if you haven't already, be sure to review this page: [How to prepare an update to be imported into Azure Device Update for IoT Hub](create-update.md).
+
+The first step to import an update into the Device Update service is always to create an import manifest if you don't already have one. For more information about import manifests, see [Importing updates into Device Update](import-concepts.md#import-manifest). The delta update feature uses a new capability called [Related Files](related-files.md), which requires an import manifest that is version 5 or later.
+
+To create an import manifest for your delta update using the Related Files feature, you'll need to add [relatedFiles](import-schema.md#relatedfiles-object) and [downloadHandler](import-schema.md#downloadhandler-object) elements to your import manifest.
+
+The `relatedFiles` element is used to specify information about the delta update file, including the file name, file size and sha256 hash (examples available at the link above). Importantly, you also need to specify two properties which are unique to the delta update feature:
+
+```json
+"properties": {
+ "microsoft.sourceFileHashAlgorithm": "sha256",
+ "microsoft.sourceFileHash": "[insert the source SWU image file hash]"
+}
+```
+Both of the properties above are specific to your _source SWU image file_ that you used as an input to the DiffGen tool when creating your delta update. The information about the source SWU image is needed in your import manifest even though you will not actually be importing the source image. The delta components on the device use this metadata about the source image to locate the image on the device once the delta has been downloaded.
+
+The `downloadHandler` element is used to specify how the Device Update agent will orchestrate the delta update, using the Related Files feature. Unless you are customizing your own version of the Device Update agent for delta functionality, you should only use this downloadHandler:
+
+```json
+"downloadHandler": {
+ "id": "microsoft/delta:1"
+}
+```
+You can use the Azure Command Line Interface (CLI) to generate an import manifest for your delta update. If you haven't used the Azure CLI to create an import manifest before, refer to [these instructions](create-update.md#create-a-basic-device-update-import-manifest).
+
+```azurecli
+az iot du update init v5 \
+    --update-provider <replace with your Provider> \
+    --update-name <replace with your update Name> \
+    --update-version <replace with your update Version> \
+    --compat manufacturer=<replace with the value your device will report> model=<replace with the value your device will report> \
+    --step handler=microsoft/swupdate:2 properties=<replace with any desired handler properties (JSON-formatted), such as '{"installedCriteria": "1.0"}'> \
+    --file path=<replace with path(s) to your update file(s), including the full file name> downloadHandler=microsoft/delta:1 \
+    --related-file path=<replace with path(s) to your delta file(s), including the full file name> properties='{"microsoft.sourceFileHashAlgorithm": "sha256", "microsoft.sourceFileHash": "<replace with the source SWU image file hash>"}'
+```
+
+Save your generated import manifest JSON to a file with the extension `.importmanifest.json`.
+
+### Import using the Azure portal
+
+Once you've created your import manifest, you're ready to import the delta update. To import, follow the instructions in [Add an update to Device Update for IoT Hub](import-update.md#import-an-update). You must include these items when importing:
+
+- The import manifest .json file you created in the previous step.
+- The _recompressed_ target SWU image created when you ran the DiffGen tool.
+- The delta file created when you ran the DiffGen tool.
+
+## Deploy the delta update to your devices
+
+When you deploy a delta update, the experience in the Azure portal looks identical to deploying a regular image update. For more information on deploying updates, see [Deploy an update by using Device Update for Azure IoT Hub](deploy-update.md).
+
+Once you've created the deployment for your delta update, the Device Update service and client automatically identify if there's a valid delta update for each device you're deploying to. If a valid delta is found, the delta update will be downloaded and installed on that device. If there's no valid delta update found, the full image update (the recompressed target SWU image) will be downloaded instead as a fallback. This approach ensures that all devices you're deploying the update to will get to the appropriate version.
+
+There are three possible outcomes for a delta update deployment:
+
+- Delta update installed successfully. Device is on new version.
+- Delta update was unavailable or failed to install, but a successful fallback install of the full image occurred instead. Device is on new version.
+- Both delta and fallback to full image failed. Device is still on old version.
+
+To determine which of the above outcomes occurred, you can view the install results with error code and extended error code by selecting any device that is in a failed state. You can also [collect logs](device-update-log-collection.md) from multiple failed devices if needed.
+
+If the delta update succeeded, the device will show a "Succeeded" status.
+
+If the delta update failed but did a successful fallback to the full image, it will show the following error status:
+
+- resultCode: _[value greater than 0]_
+- extendedResultCode: _[non-zero]_
+
+If the update was unsuccessful, it will show an error status that can be interpreted using the instructions below:
+
+- Start with the Device Update Agent errors in [result.h](https://github.com/Azure/iot-hub-device-update/blob/main/src/inc/aduc/result.h).
+
+ - Errors from the Device Update Agent that are specific to the Download Handler functionality used for delta updates begin with 0x9:
+
+ | Component | Decimal | Hex | Note |
+ |--|--|--|--|
+ | EXTENSION_MANAGER | 0 | 0x00 | Indicates errors from extension manager download handler logic. Example: 0x900XXXXX |
+ | PLUGIN | 1 | 0x01 | Indicates errors with usage of download handler plugin shared libraries. Example: 0x901XXXXX |
+ | RESERVED | 2 - 7 | 0x02 - 0x07 | Reserved for Download handler. Example: 0x902XXXXX |
+ | COMMON | 8 | 0x08 | Indicates errors in Delta Download Handler extension top-level logic. Example: 0x908XXXXX |
+ | SOURCE_UPDATE_CACHE | 9 | 0x09 | Indicates errors in Delta Download handler extension Source Update Cache. Example: 0x909XXXXX |
+ | DELTA_PROCESSOR | 10 | 0x0A | Error code for errors from delta processor API. Example: 0x90AXXXXX |
+
+ - If the error code isn't present in [result.h](https://github.com/Azure/iot-hub-device-update/blob/main/src/inc/aduc/result.h), it's likely an error in the delta processor component (separate from the Device Update agent). If so, the extendedResultCode will be a negative decimal value of the following hexadecimal format: 0x90AXXXXX (a conversion sketch follows this list)
+
+ - 9 is "Delta Facility"
+ - 0A is "Delta Processor Component" (ADUC_COMPONENT_DELTA_DOWNLOAD_HANDLER_DELTA_PROCESSOR)
+ - XXXXX is the 20-bit error code from FIT delta processor
+
+- If you aren't able to solve the issue based on the error code information, file a GitHub issue to get further assistance.
+
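As a small sketch, a negative decimal `extendedResultCode` can be converted to its 32-bit hexadecimal form in Bash so that the facility and component digits are readable; the value below is illustrative only.

```bash
EXTENDED_RESULT_CODE=-1868562432   # illustrative value reported by a device
printf '0x%08X\n' $(( EXTENDED_RESULT_CODE & 0xFFFFFFFF ))   # prints 0x90A00000
```
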
+## Next steps
+
+[Troubleshoot common issues](troubleshoot-device-update.md)
iot-hub-device-update Deploy Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/deploy-update.md
Title: Deploy an update by using Device Update for Azure IoT Hub | Microsoft Doc
description: Deploy an update by using Device Update for Azure IoT Hub. Previously updated : 2/11/2021 Last updated : 10/31/2022
Learn how to deploy an update to an IoT device by using Device Update for Azure
## Prerequisites
-* [Access to an IoT hub with Device Update for IoT Hub enabled](create-device-update-account.md). We recommend that you use an S1 (Standard) tier or above for your IoT Hub instance.
-* [At least one update has been successfully imported for the provisioned device](import-update.md).
+* Access to [an IoT Hub with Device Update for IoT Hub enabled](create-device-update-account.md). We recommend that you use an S1 (Standard) tier or above for your IoT Hub.
+* An [imported update for the provisioned device](import-update.md).
* An IoT device (or simulator) provisioned for Device Update within IoT Hub.
-* [The device is part of at least one default group or user-created update group](create-update-group.md).
-* Supported browsers:
- * [Microsoft Edge](https://www.microsoft.com/edge)
- * Google Chrome
+* The device is part of at least one default group or [user-created update group](create-update-group.md).
+
+# [Azure portal](#tab/portal)
+
+Supported browsers:
+
+* [Microsoft Edge](https://www.microsoft.com/edge)
+* Google Chrome
+
+# [Azure CLI](#tab/cli)
+
+An Azure CLI environment:
+
+* Use the Bash environment in [Azure Cloud Shell](../cloud-shell/quickstart.md).
+
+ [![Launch Cloud Shell in a new window](../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
+
+* Or, if you prefer to run CLI reference commands locally, [install the Azure CLI](/cli/azure/install-azure-cli)
+
+ 1. Sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command.
+ 2. Run [az version](/cli/azure/reference-index#az-version) to find the version and dependent libraries that are installed. To upgrade to the latest version, run [az upgrade](/cli/azure/reference-index#az-upgrade).
+ 3. When prompted, install Azure CLI extensions on first use. The commands in this article use the **azure-iot** extension. Run `az extension update --name azure-iot` to make sure you're using the latest version of the extension.
+
+>[!TIP]
+>The Azure CLI commands in this article use the backslash `\` character for line continuation so that the command arguments are easier to read. This syntax works in Bash environments. If you're running these commands in PowerShell, replace each backslash with a backtick (`` ` ``), or remove them entirely.
++

## Deploy the update
-1. Go to the [Azure portal](https://portal.azure.com).
+# [Azure portal](#tab/portal)
-1. Go to the **Device Update** pane of your IoT Hub instance.
+1. In the [Azure portal](https://portal.azure.com), navigate to your IoT hub.
+
+1. Select **Updates** from the navigation menu to open the **Device Update** page of your IoT Hub instance.
:::image type="content" source="media/deploy-update/device-update-iot-hub.png" alt-text="Screenshot that shows the Get started with the Device Update for IoT Hub page." lightbox="media/deploy-update/device-update-iot-hub.png":::
-1. Select the **Groups and Deployments** tab at the top of the page. [Learn more](device-update-groups.md) about device groups.
+1. Select the **Groups and Deployments** tab at the top of the page. For more information, see [Device groups](device-update-groups.md).
:::image type="content" source="media/deploy-update/updated-view.png" alt-text="Screenshot that shows the Groups and Deployments tab." lightbox="media/deploy-update/updated-view.png":::
-1. View the update compliance chart and groups list. You should see a new update available for your device group listed under **Best update**. You might need to refresh once. [Learn more about update compliance](device-update-compliance.md).
+1. View the update compliance chart and group list. You should see a new update available for your tag-based or default group. You might need to refresh once. For more information, see [Device Update compliance](device-update-compliance.md).
-1. Select the target group by selecting the group name. You're directed to the group details under **Group basics**.
+1. Select **Deploy** next to the group that has one or more updates available, and confirm that the descriptive label you added when importing is present and looks correct.
- :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Screenshot that shows the Group details." lightbox="media/deploy-update/group-basics.png":::
+1. Confirm that the correct group is selected as the target group and select **Deploy**.
-1. To start the deployment, go to the **Current deployment** tab. Select the deploy link next to the desired update from the **Available updates** section. The best available update for a given group is denoted with a **Best** highlight.
+1. To start the deployment, go to the **Current deployment** tab. Select the **Deploy** link next to the desired update from the **Available updates** section. The best available update for a given group is denoted with a **Best** highlight.
:::image type="content" source="media/deploy-update/select-update.png" alt-text="Screenshot that shows Best highlighted." lightbox="media/deploy-update/select-update.png":::
-1. Schedule your deployment to start immediately or in the future. Then select **Create**.
+1. Schedule your deployment to start immediately or in the future.
> [!TIP]
   > By default, the **Start** date and time is 24 hours from your current time. Be sure to select a different date and time if you want the deployment to begin earlier.

   :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Screenshot that shows the Create deployment screen" lightbox="media/deploy-update/create-deployment.png":::
-1. Under **Deployment details**, **Status** turns to **Active**. The deployed update is marked with **(deploying)**.
+1. Create an automatic rollback policy if needed. Then select **Create**.
+
+1. In the deployment details, **Status** turns to **Active**. The deployed update is marked with **(deploying)**.
:::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Screenshot that shows deployment as Active." lightbox="media/deploy-update/deployment-active.png":::
Learn how to deploy an update to an IoT device by using Device Update for Azure
:::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Screenshot that shows the update succeeded." lightbox="media/deploy-update/update-succeeded.png":::
-## Monitor the update deployment
+# [Azure CLI](#tab/cli)
+
+Use [az iot du device deployment create](/cli/azure/iot/du/device/deployment#az-iot-du-device-deployment-create) to create a deployment for a device group.
+
+The `device deployment create` command takes the following arguments:
+
+* `--account`: The Device Update account name.
+* `--instance`: The Device Update instance name.
+* `--group-id`: The device group ID that you're targeting with this deployment. This ID is the value of the **ADUGroup** tag, or `$default` for devices with no tag.
+* `--deployment-id`: An ID to identify this deployment.
+* `--update-name`, `--update-provider`, and `--update-version`: These three parameters define the **updateId** object that is a unique identifier for the update that you're using in this deployment.
+
+```azurecli
+az iot du device deployment create \
+ --account <Device Update account name> \
+ --instance <Device Update instance name> \
+ --group-id <device group id> \
+ --deployment-id <deployment id> \
+ --update-name <update name> \
+ --update-provider <update provider> \
+ --update-version <update version>
+```
+
+Optional arguments allow you to configure the deployment. For the full list, see [Optional parameters](/cli/azure/iot/du/device/deployment#az-iot-du-device-deployment-create-optional-parameters).
+
+If you want to create an automatic rollback policy, add the following parameters:
+
+* `--failed-count`: The number of failed devices in a deployment that will trigger a rollback.
+* `--failed-percentage`: The percentage of failed devices in a deployment that will trigger a rollback.
+* `--rollback-update-name`, `--rollback-update-provider`, `--rollback-update-version`: The updateID for the update that the device group will use if a rollback is initiated.
+
+```azurecli
+az iot du device deployment create \
+ --account <Device Update account name> \
+ --instance <Device Update instance name> \
+ --group-id <device group id> \
+ --deployment-id <deployment id> \
+ --update-name <update name> \
+ --update-provider <update provider> \
+ --update-version <update version> \
+ --failed-count 10 \
+ --failed-percentage 5 \
+ --rollback-update-name <rollback update name> \
+ --rollback-update-provider <rollback update provider> \
+ --rollback-update-version <rollback update version>
+```
+
+If you want the deployment to start in the future, use the `--start-time` parameter to provide the target datetime for the deployment.
+
+```azurecli
+az iot du device deployment create \
+ --account <Device Update account name> \
+ --instance <Device Update instance name> \
+ --group-id <device group id> \
+ --deployment-id <deployment id> \
+ --update-name <update name> \
+ --update-provider <update provider> \
+ --update-version <update version> \
+ --start-time "2022-12-20T01:00:00"
+```
+++
+## Monitor an update deployment
+
+# [Azure portal](#tab/portal)
-1. Select the **Deployment history** tab at the top of the page.
+1. Select the group you deployed to, and go to the **Current updates** or **Deployment history** tab to confirm that the deployment is in progress.
:::image type="content" source="media/deploy-update/deployments-history.png" alt-text="Screenshot that shows the Deployment history tab." lightbox="media/deploy-update/deployments-history.png":::
-1. Select **Details** next to the deployment you created.
+1. Select **Details** next to the deployment you created. Here you can view the deployment details, update details, and target device class details. You can optionally add a friendly name for the device class.
:::image type="content" source="media/deploy-update/deployment-details.png" alt-text="Screenshot that shows deployment details." lightbox="media/deploy-update/deployment-details.png"::: 1. Select **Refresh** to view the latest status details.
+1. You can go to the group basics view to search for the status of a particular device, or filter to view devices that have failed the deployment.
+
+# [Azure CLI](#tab/cli)
+
+Use [az iot du device deployment list](/cli/azure/iot/du/device/deployment#az-iot-du-device-deployment-list) to view all deployments for a device group.
+
+```azurecli
+az iot du device deployment list \
+ --account <Device Update account name> \
+ --instance <Device Update instance name> \
+ --group-id <device group id>
+```
+
+Use [az iot du device deployment show](/cli/azure/iot/du/device/deployment#az-iot-du-device-deployment-show) to view the details of a particular deployment.
+
+```azurecli
+az iot du device deployment show \
+ --account <Device Update account name> \
+ --instance <Device Update instance name> \
+ --group-id <device group ID> \
+ --deployment-id <deployment ID>
+```
+
+Add the `--status` flag to return information about how many devices in the deployment are in progress, completed, or failed.
+
+```azurecli
+az iot du device deployment show \
+ --account <Device Update account name> \
+ --instance <Device Update instance name> \
+ --group-id <device group ID> \
+ --deployment-id <deployment ID> \
+ --status
+```
+++

## Retry an update deployment

If your deployment fails for some reason, you can retry the deployment for failed devices.
+# [Azure portal](#tab/portal)
+ 1. Go to the **Current deployment** tab on the **Group details** screen.

   :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Screenshot that shows the deployment as Active." lightbox="media/deploy-update/deployment-active.png":::

1. Select **Retry failed devices** and acknowledge the confirmation notification.
+# [Azure CLI](#tab/cli)
+
+Use [az iot du device deployment retry](/cli/azure/iot/du/device/deployment#az-iot-du-device-deployment-retry) to retry a deployment for a target subgroup of devices.
+
+This command takes the `--class-id` argument, which is generated from the model ID and compatibility properties reported by the device update agent.
+
+```azurecli
+az iot du device deployment retry \
+ --account <Device Update account name> \
+ --instance <Device Update instance name> \
+ --deployment-id <deployment ID> \
+ --group-id <device group ID> \
+ --class-id <device class ID>
+```
+++

## Next steps

[Troubleshoot common issues](troubleshoot-device-update.md)
iot-hub-device-update Device Update Agent Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-agent-check.md
+
+ Title: Device Update for Azure IoT Hub agent check | Microsoft Docs
+description: Device Update for IoT Hub uses Agent Check to find and diagnose missing devices.
++ Last updated : 10/31/2022++++
+# Find and fix devices missing from Device Update for IoT Hub using agent check
+
+Learn how to use the **agent check** feature to find, diagnose, and fix devices missing from your Device Update for IoT Hub instance.
+
+## Prerequisites
+
+* Access to [an IoT Hub with Device Update for IoT Hub enabled](create-device-update-account.md).
+* An IoT device (or simulator) [provisioned for Device Update](device-update-agent-provisioning.md) and reporting a compatible Plug and Play (PnP) model ID.
+
+> [!NOTE]
+> The agent check feature can only perform validation checks on devices that have the Device Update agent installed and are reporting a PnP model ID that matches those compatible with Device Update for IoT Hub.
+
+# [Azure portal](#tab/portal)
+
+Supported browsers:
+
+* [Microsoft Edge](https://www.microsoft.com/edge)
+* Google Chrome
+
+# [Azure CLI](#tab/cli)
+
+An Azure CLI environment:
+
+* Use the Bash environment in [Azure Cloud Shell](../cloud-shell/quickstart.md).
+
+ [![Launch Cloud Shell in a new window](../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
+
+* Or, if you prefer to run CLI reference commands locally, [install the Azure CLI](/cli/azure/install-azure-cli)
+
+ 1. Sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command.
+ 2. Run [az version](/cli/azure/reference-index#az-version) to find the version and dependent libraries that are installed. To upgrade to the latest version, run [az upgrade](/cli/azure/reference-index#az-upgrade).
+ 3. When prompted, install Azure CLI extensions on first use. The commands in this article use the **azure-iot** extension. Run `az extension update --name azure-iot` to make sure you're using the latest version of the extension.
+++
+## Validation checks supported by agent check
+
+The agent check feature currently performs the following validation checks on all devices that meet the above prerequisites.
+
+| Validation check | Criteria |
+|||
+| PnP model ID | The PnP model ID is a string that is reported by the Device Update agent to the device twin that describes what PnP model should be used for device/cloud communication. This string must be a valid digital twin model identifier (DTMI) that supports the Device Update interface. |
+| Interface ID | The interface ID is a string that is reported by the Device Update agent to the device twin that describes what Device Update interface version should be used for device/cloud communication. This string must be a valid DTMI that supports the Device Update interface. |
+| Compatibility property names | `CompatPropertyNames` is a field reported by the Device Update agent to the device twin that describes what `deviceProperties` fields should be used to determine the device's compatibility with a given deployment. This field's value must be a string of comma-delimited names. The string must contain at least one and no more than five names. Each name must be <32 characters. |
+| Compatibility property values | Compatibility property values are the field:value pairs specified by the `compatPropertyNames` field and reported by the Device Update agent to the device twin as `deviceProperties`. Every name defined in compatibility property names must have a corresponding field:value pair reported. The value for each pair is limited to 64 characters. |
+| ADU group | The ADU Group tag is an optional tag that is defined in the device's device twin and determines what device group the device belongs to. If specified, the tag string is limited to 255 characters and may only contain alphanumeric characters and the following special characters: "." "-" "_" "~" |
+
+If a device fails any of these criteria, it may not show up properly in Device Update. Correcting the invalid value to meet the specified criteria should cause the device to properly appear in Device Update. If the device doesn't show up in Device Update **or** in agent check, you may need to run device sync to resolve the issue.
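+
+To check what a specific device is actually reporting against these criteria, you can inspect its twin with the Azure CLI. The following is a minimal sketch that assumes the Device Update agent uses a device identity (for a module identity, use `az iot hub module-twin show` instead); the hub and device names are placeholders:
+
+```azurecli
+# Show the announced model ID, tags (including ADUGroup), and reported properties for a device
+az iot hub device-twin show \
+ --hub-name <IoT Hub name> \
+ --device-id <device ID> \
+ --query "{modelId: modelId, tags: tags, reported: properties.reported}"
+```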
+
+## View agent check results
+
+# [Azure portal](#tab/portal)
+
+The results of agent check can be found in the diagnostics tab of Device Update.
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your IoT hub.
+1. Select **Updates** from the navigation menu, then select the **Diagnostics** tab.
+1. Expand the **Find missing devices** section.
+
+# [Azure CLI](#tab/cli)
+
+Use [az iot du device health list](/cli/azure/iot/du/device/health#az-iot-du-device-health-list) to view the health of your devices.
+
+The `device health list` command takes the following arguments:
+
+* `--account`: The name of the Device Update account.
+* `--instance`: The name of the Device Update instance.
+* `--filter`: A device health filter, either filtering on device state or device ID. For example:
+ * To list all healthy devices, filter by `state eq 'Healthy'`
+ * To list all unhealthy devices, filter by `state eq 'Unhealthy'`
+ * To show the health state of a target device, filter by `deviceId eq '<device_name>'` or `deviceId eq '<device_name>' and moduleId eq '<module_name>'`
+
+```azurecli
+az iot du device health list --account <Device Update account name> --instance <Device Update instance name> --filter "<filter query>"
+```
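+
+For example, to list all unhealthy devices in an instance, pass one of the filters above directly (the account and instance names are placeholders):
+
+```azurecli
+az iot du device health list \
+ --account <Device Update account name> \
+ --instance <Device Update instance name> \
+ --filter "state eq 'Unhealthy'"
+```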
+++
+## Initiate a device sync operation
+
+Device sync should be triggered if a device has been registered in IoT Hub but isn't showing up in Device Update or in agent check results.
+
+Only one device sync operation may be active at a time for each Device Update instance.
+
+# [Azure portal](#tab/portal)
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your IoT hub.
+1. Select **Updates** from the navigation menu, then select the **Diagnostics** tab.
+1. Expand the **View device health** section.
+1. Select **Start a device sync**.
+
+# [Azure CLI](#tab/cli)
+
+Use [az iot du device import](/cli/azure/iot/du/device#az-iot-du-device-import) to import devices and modules to the Device Update instance from a linked IoT hub.
+
+```azurecli
+az iot du device import --account <Device Update account name> --instance <Device Update instance name>
+```
+++
+## Next steps
+
+To learn more about Device Update's diagnostic capabilities, see the [Device Update diagnostic feature overview](device-update-diagnostics.md).
iot-hub-device-update Device Update Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-agent-overview.md
Title: Understand Device Update for Azure IoT Hub Agent| Microsoft Docs description: Understand Device Update for Azure IoT Hub Agent.-- Previously updated : 2/12/2021++ Last updated : 9/12/2022
The Device Update agent consists of two conceptual layers:
## The interface layer
-The interface layer is made up of the [Device Update core interface](https://github.com/Azure/iot-hub-device-update/tree/main/src/agent/adu_core_interface) and the [Device information interface](https://github.com/Azure/iot-hub-device-update/tree/main/src/agent/device_info_interface).
+The interface layer is made up of the [Device Update core interface](https://github.com/Azure/iot-hub-device-update/tree/main/src/agent/adu_core_interface), [Device information interface](https://github.com/Azure/iot-hub-device-update/tree/main/src/agent/device_info_interface) and [Diagnostic information interface](https://github.com/Azure/iot-hub-device-update/tree/main/src/diagnostics_component/diagnostics_interface).
These interfaces rely on a configuration file for the device specific values that need to be reported to the Device Update services. For more information, see [Device Update configuration file](device-update-configuration-file.md). ### Device Update core interface
-The *Device Update core interface* is the primary communication channel between the Device Update agent and services. For more information, see [Device Update core interface](device-update-plug-and-play.md#device-update-core-interface).
+The *Device Update core interface* is the primary communication channel between the Device Update agent and services. For more information, see [Device Update core interface](https://github.com/Azure/iot-plugandplay-models/blob/main/dtmi/azure/iot/deviceupdate-1.json).
### Device information interface
-The *device information interface* is used to implement the `Azure IoT PnP DeviceInformation` interface. For more information, see [Device information interface](device-update-plug-and-play.md#device-information-interface).
+The *device information interface* is used to implement the `Azure IoT PnP DeviceInformation` interface. For more information, see [Device information interface](https://github.com/Azure/iot-plugandplay-models/blob/main/dtmi/azure/devicemanagement/deviceinformation-1.json).
+
+### Diagnostic information interface
+
+The *diagnostic information interface* is used to enable [remote log collection](device-update-diagnostics.md#remote-log-collection) for diagnostics. For more information, see the [Diagnostic information interface](https://github.com/Azure/iot-hub-device-update/tree/main/src/diagnostics_component/diagnostics_interface).
## The platform layer
If you choose to implement with your own downloader in place of Delivery Optimiz
Update handlers are used to invoke installers or commands to do an over-the-air update. You can either use [existing update content handlers](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers) or [implement a custom content handler](https://github.com/Azure/iot-hub-device-update/tree/main/docs/agent-reference/how-to-implement-custom-update-handler.md) that can invoke any installer and execute the over-the-air update needed for your use case.
-## Updating to latest Device Update agent
-
-We have added many new capabilities to the Device Update agent in the latest public preview refresh agent (version 0.8.0). For more information, see the [list of new capabilities](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/whats-new.md).
+## Changes to Device Update agent at GA release
-If you're using the Device Update agent versions 0.6.0 or 0.7.0, please migrate to the latest agent version 0.8.0. For more information, see [Migrate devices and groups to public preview refresh](migration-pp-to-ppr.md).
+If you are using an earlier Device Update agent version, migrate to the latest agent version 1.0.0, which is the GA version. See [GA agent changes and how to upgrade](migration-public-preview-refresh-to-ga.md).
-You can check the installed version of the Device Update agent and the Delivery Optimization agent in the device properties section of your [IoT device twin](../iot-hub/iot-hub-devguide-device-twins.md). For more information, see [device properties of the Device Update core interface](device-update-plug-and-play.md#device-properties).
+You can check the installed version of the Device Update agent and the Delivery Optimization agent in the Device Properties section of your [IoT device twin](../iot-hub/iot-hub-devguide-device-twins.md). [Learn more about device properties under the ADU Core Interface](device-update-plug-and-play.md#device-properties).
## Next Steps
iot-hub-device-update Device Update Agent Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-agent-provisioning.md
Title: Provisioning Device Update for Azure IoT Hub Agent| Microsoft Docs description: Provisioning Device Update for Azure IoT Hub Agent-- Previously updated : 1/26/2022++ Last updated : 8/26/2022
The Device Update Module agent can run alongside other system processes and [IoT Edge modules](../iot-edge/iot-edge-modules.md) that connect to your IoT Hub as part of the same logical device. This section describes how to provision the Device Update agent as a module identity.
-## Changes to Device Update agent at Public Preview Refresh
+## Changes to Device Update agent at GA release
-We have added many new capabilities to the Device Update agent in the latest Public Preview Refresh agent (version 0.8.0). See [list of new capabilities](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/whats-new.md) for details.
-
-If you are using the Device Update agent versions 0.6.0 or 0.7.0 please migrate to the latest agent version 0.8.0. See [Public Preview Refresh agent for changes and how to upgrade](migration-pp-to-ppr.md)
+If you are using an earlier Device Update agent version, migrate to the latest agent version 1.0.0, which is the GA version. See [GA agent changes and how to upgrade](migration-public-preview-refresh-to-ga.md).
You can check the installed version of the Device Update agent and the Delivery Optimization agent in the Device Properties section of your [IoT device twin](../iot-hub/iot-hub-devguide-device-twins.md). [Learn more about device properties under the ADU Core Interface](device-update-plug-and-play.md#device-properties).
Follow these instructions to provision the Device Update agent on [IoT Edge enab
``` ```shell
- sudo apt-get install deviceupdate-agent deliveryoptimization-plugin-apt
+ sudo apt-get install deviceupdate-agent
``` - For any 'rc' i.e. release candidate agent versions from [Artifacts](https://github.com/Azure/iot-hub-device-update/releases) : Download the .deb file to the machine you want to install the Device Update agent on, then:
Follow these instructions to provision the Device Update agent on [IoT Edge enab
```shell sudo apt-get install -y ./"<PATH TO FILE>"/"<.DEB FILE NAME>" ```
+ - If you are setting up an [MCC for a disconnected device scenario](connected-cache-disconnected-device-update.md), install the Delivery Optimization APT plugin:
+
+ ```shell
+ sudo apt-get install deliveryoptimization-plugin-apt
+ ```
1. You are now ready to start the Device Update agent on your IoT Edge device.
iot-hub-device-update Device Update Configuration File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-configuration-file.md
Title: Understand Device Update for Azure IoT Hub Configuration File| Microsoft Docs description: Understand Device Update for Azure IoT Hub Configuration File.-- Previously updated : 06/27/2022++ Last updated : 08/27/2022
The Device Update agent gets its configuration information from the `du-config.j
* AzureDeviceUpdateCore:4.ClientMetadata:4.deviceProperties["model"] * DeviceInformation.manufacturer * DeviceInformation.model
+* additionalProperties
* connectionData * connectionType
When installing Debian agent on an IoT Device with a Linux OS, modify the `/etc/
## List of fields
-| Name | Description |
+| Name | Description |
|--|--| | SchemaVersion | The schema version that maps the current configuration file format version. | | aduShellTrustedUsers | The list of users that can launch the **adu-shell** program. Note, adu-shell is a broker program that does various update actions as 'root'. The Device Update default content update handlers invoke adu-shell to do tasks that require super user privilege. Examples of tasks that require this privilege are `apt-get install` or executing a privileged script. | | aduc_manufacturer | Reported by the **AzureDeviceUpdateCore:4.ClientMetadata:4** interface to classify the device for targeting the update deployment. | | aduc_model | Reported by the **AzureDeviceUpdateCore:4.ClientMetadata:4** interface to classify the device for targeting the update deployment. |
+| iotHubProtocol | Accepted values are `mqtt` or `mqtt/ws`, used to change the protocol that is used to connect with IoT Hub. The default value is `mqtt`. |
+| compatPropertyNames | These properties are used to check the compatibility of the device for targeting the update deployment. |
+| additionalProperties | Optional field. Additional device-reported properties can be set and used for compatibility checking. Limited to five device properties. |
| connectionType | Accepted values are `string` or `AIS`. Use `string` when connecting the device to IoT Hub manually for testing purposes. For production scenarios, use `AIS` when using the IoT Identity Service to connect the device to IoT Hub. For more information, see [understand IoT Identity Service configurations](https://azure.github.io/iot-identity-service/configuration.html). | | connectionData |If connectionType = "string", add your IoT device's device or module connection string here. If connectionType = "AIS", set the connectionData to empty string (`"connectionData": ""`). | | manufacturer | Reported by the Device Update agent as part of the **DeviceInformation** interface. |
When installing Debian agent on an IoT Device with a Linux OS, modify the `/etc/
"adu", "do" ],
+ "iotHubProtocol": "mqtt",
+ "compatPropertyNames":"manufacturer,model,location,language",
"manufacturer": <Place your device info manufacturer here>, "model": <Place your device info model here>, "agents": [
When installing Debian agent on an IoT Device with a Linux OS, modify the `/etc/
"connectionData": <Place your Azure IoT device connection string here> }, "manufacturer": <Place your device property manufacturer here>,
- "model": <Place your device property model here>
+ "model": <Place your device property model here>,
+ "additionalDeviceProperties": {
+ "location": "USA",
+ "environment": "development"
+ }
} ] }
iot-hub-device-update Device Update Control Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-control-access.md
A combination of roles can be used to provide the right level of access. For exa
## Configuring access for Azure Device Update service principal in the IoT Hub
-Device Update for IoT Hub communicates with the IoT Hub for deployments and manage updates at scale. In order to enable Device Update to do this, users need to set IoT Hub Data Contributor access for Azure Device Update Service Principal in the IoT Hub permissions.
-
-Below actions will be blocked with upcoming release, if these permissions are not set:
+Device Update for IoT Hub communicates with the IoT Hub to deploy and manage updates at scale. To enable Device Update to do this, users need to grant IoT Hub Data Contributor access to the Azure Device Update service principal in the IoT Hub permissions.
+Deployment, device and update management, and diagnostic actions won't be allowed if these permissions are not set. Operations that will be blocked include:
* Create Deployment * Cancel Deployment
-* Retry Deployment
+* Retry Deployment
* Get Device
-1. Go to the **IoT Hub** connected to your Device Update Instance. Click **Access Control(IAM)**
-2. Click **+ Add** -> **Add role assignment**
-3. Under Role tab, select **IoT Hub Data Contributor**
-4. Click **Next**. For **Assign access to**, select **User, group, or service principal**. Click **+ Select Members**, search for '**Azure Device Update**'
-5. Click **Next** -> **Review + Assign**
-
-To validate that you've set permissions correctly:
-
-1. Go to the **IoT Hub** connected to your Device Update Instance. Click **Access Control(IAM)**
-2. Click **Check access**
-3. Select **User, group, or service principal** and search for '**Azure Device Update**'
-4. After clicking on '**Azure Device Update**', verify that the **IoT Hub Data Contributor** role is listed under **Role assignments**
+The permission can be set from IoT Hub Access Control (IAM). For more information, see [Configure access for the Azure Device Update service principal in the linked IoT hub](configure-access-control-device-update.md#configure-access-for-azure-device-update-service-principal-in-linked-iot-hub).
## Authenticate to Device Update REST APIs
To add and remove a system-assigned managed identity in Azure portal:
3. Navigate to Identity in your IoT Hub portal 4. Under System-assigned tab, select On and click Save.
-To remove system-assigned managed identity from an Device Update for IoT hub account, select Off and click Save.
+To remove the system-assigned managed identity from a Device Update for IoT Hub account, select Off and click Save.
iot-hub-device-update Device Update Data Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-data-privacy.md
+
+ Title: Data privacy for Device Update for Azure IoT Hub | Microsoft Docs
+description: Understand how Device Update for IoT Hub protects data privacy.
++ Last updated : 09/12/2022++++
+# Device Update telemetry collection
+
+Device Update for IoT Hub is a REST API-based cloud service targeted at enterprise customers that enables secure, over-the-air updating of millions of devices via a partitioned Azure service.
+
+In order to maintain the quality and availability of the Device Update service, Microsoft collects certain telemetry from your Customer Data, which may be stored and processed outside of your Azure region. The following is a list of the data points that Microsoft collects about the Device Update service.
+* Device Manufacturer, Model*
+* Device Interface Version*
+* DU Agent Version, DO Agent Version*
+* Update Namespace, Name, Version*
+* IoT Hub Device ID
+* DU Account ID, Instance ID
+* Import ErrorCode, ExtendedErrorCode
+* Deployment ResultCode, ExtendedResultCode
+* Log collection ResultCode, Extended ResultCode
+
+*For fields marked with an asterisk, don't include any personal or sensitive data.
+
+Microsoft maintains no information and has no access to data that would allow correlation of these telemetry data points with an individual's identity. These system-generated data points are not accessible or exportable by tenant administrators. These data constitute factual actions conducted within the service and diagnostic data related to individual devices.
+
+For further information on Microsoft's privacy commitments, please read the "Enterprise and developer products" section of the Microsoft Privacy Statement.
iot-hub-device-update Device Update Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-deployments.md
# Update deployments
-A deployment is how updates are delivered to one or more devices. Deployments are always associated with a device group. A deployment can be initiated from the API or the UI.
+A deployment is how updates are delivered to one or more devices. Deployments are always associated with a device group. A deployment can be initiated from the API or the UI.
+ A device group can only have one active deployment associated with it at any given time. A deployment can be scheduled to begin in the future or start immediately. ## Dynamic deployments
Deployments in Device Update for IoT Hub are dynamic in nature. Dynamic deployme
Due to their dynamic nature, deployments remain active and in-progress until they are explicitly canceled. A deployment is considered inactive and superseded if a new deployment is created targeting the same device group. A deployment can be retried for devices that might fail. Once a deployment is canceled, it cannot be reactivated.
+## Deployment policies
+
+### Deployment scheduling
+
+Update deployments can be scheduled to start immediately or to start in the future at a particular time and date. This allows the user to efficiently plan device downtime so that it doesn't interfere with any other critical device workflows.
+
+### Automatic rollback policy
+
+After deploying an update, it is critical to ensure that:
+
+- Devices are in a clean state post-install. That is, if an update partially fails, devices should be back to their last known good state.
+- The device ecosystem is consistent. That is, all devices in a group should be running the same version for easier manageability.
+- The rollback process is as hands-off as possible, with an option for the device operator to intervene manually only under rare, special circumstances.
+
+To enable device operators to meet these goals, update deployments can be configured with an automatic rollback policy from the cloud. This allows you to define a rollback trigger policy by setting thresholds for the percentage and minimum number of failed devices. Once the threshold has been met, all the devices in the group will be rolled back to the selected update version.
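+
+As a sketch of how scheduling and rollback policy can be combined, a single `az iot du device deployment create` call can set a future start time and rollback thresholds. The parameter names below follow the azure-iot CLI extension and all values are placeholders; verify them with `az iot du device deployment create --help` for your extension version:
+
+```azurecli
+# Create a scheduled deployment with an automatic rollback policy (all values are placeholders)
+az iot du device deployment create \
+ --account <Device Update account name> \
+ --instance <Device Update instance name> \
+ --group-id <device group ID> \
+ --deployment-id <deployment ID> \
+ --update-provider <update provider> \
+ --update-name <update name> \
+ --update-version <update version> \
+ --start-time "2023-01-01T00:00:00" \
+ --failed-count 10 \
+ --failed-percentage 5 \
+ --rollback-update-provider <rollback update provider> \
+ --rollback-update-name <rollback update name> \
+ --rollback-update-version <rollback update version>
+```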
+ ## Next steps [Deploy an update](./deploy-update.md)
iot-hub-device-update Device Update Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-diagnostics.md
Title: Understand Device Update for Azure IoT Hub diagnostic features | Microsof
description: Understand what diagnostic features Device Update for IoT Hub has, including deployment error codes in UX and remote log collection. Previously updated : 1/26/2021 Last updated : 9/2/2022 # Device Update for IoT Hub diagnostics overview
-Device Update for IoT Hub has several features that help you to diagnose and troubleshoot device-side errors. With the release of the v0.8.0 agent, there are two diagnostic features available:
+Device Update for IoT Hub has several features that help you to diagnose and troubleshoot device-side errors. With the release of the v0.9.0 agent, there are three diagnostic features available:
-* **Deployment error codes** can be viewed directly in the latest preview version of the Device Update user interface
+* **Deployment error codes** can be viewed directly in the Device Update user interface
* **Remote log collection** enables the creation of log operations, which instruct targeted devices to upload on-device diagnostic logs to a linked Azure Blob storage account
+* **Agent Check** runs validation checks on devices registered to your Device Update instance with the goal of diagnosing devices that are registered in the connected IoT Hub, but are not showing up in Device Update
+ ## Deployment error codes in UI When a device reports a deployment failure to the Device Update service, the Device Update user interface displays the device's reported `resultCode` and `extendedResultCode` in the user interface. Use the following steps to view these codes:
When a device reports a deployment failure to the Device Update service, the Dev
When more information from the device is necessary to diagnose and troubleshoot an error, you can use the log collection feature to instruct targeted devices to upload on-device diagnostic logs to a linked Azure Blob storage account. You can start using this feature by following the instructions in [Remotely collect diagnostic logs from devices](device-update-log-collection.md).
-Device Update's remote log collection is a service-driven, operation-based feature. To take advantage of log collection, a device need only be able to implement the Diagnostics interface and configuration file, and be able to upload files to Azure Blob storage via SDK.
+Device Update's remote log collection is a service-driven, operation-based feature. To take advantage of log collection, a device need only be able to implement the [Diagnostics interface](device-update-plug-and-play.md#device-update-models) and configuration file, and be able to upload files to Azure Blob storage via SDK.
From a high level, the log collection feature works as follows:
From a high level, the log collection feature works as follows:
> [!NOTE] > Since the log operation is carried out in parallel by the targeted devices, it is possible that some targeted devices successfully uploaded logs, but the overall log operation is marked as failed. You can see which devices succeeded and which failed by viewing the log operation details through the user interface or APIs.
+## Agent Check
+
+When your device is registered in IoT Hub but is not appearing in your Device Update instance, you can use the Agent Check feature to run pre-made validation checks to help you diagnose the underlying issue. You can start using this feature by following these [Agent Check instructions](device-update-agent-check.md).
+
+From a high level, the agent check feature works as follows:
+
+- The user registers a device with IoT Hub. If the device reports a Model ID that matches those compatible with Device Update for IoT Hub, the user's connected Device Update instance will automatically register the device with Device Update.
+
+- In order for a device to be properly managed by Device Update, it must meet certain criteria that can be verified using Agent Check's pre-made validation checks. For more information on these criteria, see [the agent check validation checks](device-update-agent-check.md).
+
+- If a device does not meet all of these criteria, it cannot be properly managed by Device Update and will not show up in the Device Update interface or API responses. Users can use Agent Check to find such a device and identify which criteria are not being met.
+
+- Once the user has identified which criteria are not being met, they can correct the issue, and the device should then appear properly in the Device Update interface.
+ ## Next steps
-Learn how to use Device Update's remote log collection feature: [Remotely collect diagnostic logs from devices using Device Update for IoT Hub](device-update-log-collection.md)
+Learn how to use Device Update's remote log collection and Agent Check features:
+
+ - [Remotely collect diagnostic logs from devices using Device Update for IoT Hub](device-update-log-collection.md)
+ - [Find and fix devices missing from Device Update for IoT Hub](device-update-agent-check.md)
iot-hub-device-update Device Update Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-groups.md
Below are the devices and the possible groups that can be created for them.
| Device3 | Group2 | | Device4 | DefaultGroup1-(deviceClassId) |
-## Invalid group
-A corresponding invalid group is created for every user-defined group. A device is added to the invalid group if it doesn't meet the compatibility requirements of the user-defined group. This grouping can be resolved by either re-tagging and regrouping the device under a new group, or modifying its compatibility properties through the agent configuration file.
-
-An invalid group only exists for diagnostic purposes. Updates cannot be deployed to invalid groups.
## Next steps
iot-hub-device-update Device Update Howto Proxy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-howto-proxy-updates.md
This tutorial uses an Ubuntu Server 18.04 LTS virtual machine (VM) as an example
1. Register *packages.microsoft.com* in an APT package repository:
- ```sh
- sudo apt-get update
+ ```sh
+ sudo apt-get update
- sudo apt install curl
+ sudo apt install curl
- curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ~/microsoft-prod.list
+ curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ~/microsoft-prod.list
- sudo cp ~/microsoft-prod.list /etc/apt/sources.list.d/
+ sudo cp ~/microsoft-prod.list /etc/apt/sources.list.d/
- curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > ~/microsoft.gpg
+ curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > ~/microsoft.gpg
- sudo cp ~/microsoft.gpg /etc/apt/trusted.gpg.d/
+ sudo cp ~/microsoft.gpg /etc/apt/trusted.gpg.d/
- sudo apt-get update
- ```
+ sudo apt-get update
+ ```
2. Install the **deviceupdate-agent** on the IoT device. Download the latest Device Update Debian file from *packages.microsoft.com*:
This tutorial uses an Ubuntu Server 18.04 LTS virtual machine (VM) as an example
Alternatively, copy the downloaded Debian file to the test VM. If you're using PowerShell on your computer, run the following shell command:
- ```sh
- scp <path to the .deb file> tester@<your vm's ip address>:~
+ ```sh
+ scp <path to the .deb file> tester@<your vm's ip address>:~
``` Then remote into your VM and run the following shell command in the *home* folder: ```sh
- #go to home folder
- cd ~
- #install latest Device Update agent
- sudo apt-get install ./<debian file name from the previous step>
+ #go to home folder
+ cd ~
+ #install latest Device Update agent
+ sudo apt-get install ./<debian file name from the previous step>
```
-
+ 3. Go to Azure IoT Hub and copy the primary connection string for your IoT device's Device Update module. Replace any default value for the `connectionData` field with the primary connection string in the *du-config.json* file: ```sh
- sudo nano /etc/adu/du-config.json
+ sudo nano /etc/adu/du-config.json
```
-
+ > [!NOTE]
- > You can copy the primary connection string for the device instead, but we recommend that you use the string for the Device Update module. For information about setting up the module, see [Device Update Agent provisioning](device-update-agent-provisioning.md).
-
-4. Ensure that */etc/adu/du-diagnostics-config.json* contains the correct settings for log collection. For example:
+ > You can copy the primary connection string for the device instead, but we recommend that you use the string for the Device Update module. For information about setting up the module, see [Device Update Agent provisioning](device-update-agent-provisioning.md).
- ```sh
+4. Ensure that */etc/adu/du-diagnostics-config.json* contains the correct settings for log collection. For example:
+
+ ```json
{ "logComponents":[ {
This tutorial uses an Ubuntu Server 18.04 LTS virtual machine (VM) as an example
For testing and demonstration purposes, we'll create the following mock components on the device: -- Three motors-- Two cameras-- "hostfs"-- "rootfs"
+* Three motors
+* Two cameras
+* "hostfs"
+* "rootfs"
> [!IMPORTANT] > The preceding component configuration is based on the implementation of an example component enumerator extension called *libcontoso-component-enumerator.so*. It also requires this mock component inventory data file: */usr/local/contoso-devices/components-inventory.json*. 1. Copy the [demo](https://github.com/Azure/iot-hub-device-update/tree/main/src/extensions/component-enumerators/examples/contoso-component-enumerator/demo) folder to your home directory on the test VM. Then, run the following command to copy required files to the right locations:
- ```markup
+ ```sh
`~/demo/tools/reset-demo-components.sh` ```
- The `reset-demo-components.sh` command takes the following steps on your behalf:
+ The `reset-demo-components.sh` command takes the following steps on your behalf:
- 1. It copies [components-inventory.json](https://github.com/Azure/iot-hub-device-update/tree/main/src/extensions/component-enumerators/examples/contoso-component-enumerator/demo/demo-devices/contoso-devices/components-inventory.json) and adds it to the */usr/local/contoso-devices* folder.
+ * It copies [components-inventory.json](https://github.com/Azure/iot-hub-device-update/tree/main/src/extensions/component-enumerators/examples/contoso-component-enumerator/demo/demo-devices/contoso-devices/components-inventory.json) and adds it to the */usr/local/contoso-devices* folder.
- 2. It copies the Contoso component enumerator extension (*libcontoso-component-enumerator.so*) from the [Assets folder](https://github.com/Azure/iot-hub-device-update/releases) and adds it to the */var/lib/adu/extensions/sources* folder.
-
- 3. It registers the extension:
+ * It copies the Contoso component enumerator extension (*libcontoso-component-enumerator.so*) from the [Assets folder](https://github.com/Azure/iot-hub-device-update/releases) and adds it to the */var/lib/adu/extensions/sources* folder.
- ```sh
- sudo /usr/bin/AducIotAgent -E /var/lib/adu/extensions/sources/libcontoso-component-enumerator.so
- ```
+ * It registers the extension:
+
+ ```sh
+ sudo /usr/bin/AducIotAgent -E /var/lib/adu/extensions/sources/libcontoso-component-enumerator.so
+ ```
2. View and record the current components' software version by using the following command to set up the VM to support proxy updates:
- ```markup
+ ```sh
~/demo/show-demo-components.sh ```
For testing and demonstration purposes, we'll create the following mock componen
If you haven't already done so, create a [Device Update account and instance](create-device-update-account.md), including configuring an IoT hub. Then start the following procedure.
-1. From the [latest Device Update release](https://github.com/Azure/iot-hub-device-update/releases), under **Assets**, download the import manifests and images for proxy updates.
+1. From the [latest Device Update release](https://github.com/Azure/iot-hub-device-update/releases), under **Assets**, download the import manifests and images for proxy updates.
2. Sign in to the [Azure portal](https://portal.azure.com/) and go to your IoT hub with Device Update. On the left pane, select **Device Management** > **Updates**. 3. Select the **Updates** tab.
-4. Select **+ Import New Update**.
-5. Select **+ Select from storage container**, and then choose your storage account and container.
+4. Select **+ Import New Update**.
+5. Select **+ Select from storage container**, and then choose your storage account and container.
:::image type="content" source="media/understand-device-update/one-import.png" alt-text="Screenshot that shows the button for selecting to import from a storage container." lightbox="media/understand-device-update/one-import.png":::+ 6. Select **Upload** to add the files that you downloaded in step 1.
-7. Upload the parent import manifest, child import manifest, and payload files to your container.
+7. Upload the parent import manifest, child import manifest, and payload files to your container.
+
+ The following example shows sample files uploaded to update cameras connected to a smart vacuum cleaner device. It also includes a pre-installation script to turn off the cameras before the over-the-air update.
- The following example shows sample files uploaded to update cameras connected to a smart vacuum cleaner device. It also includes a pre-installation script to turn off the cameras before the over-the-air update.
-
- In the example, the parent import manifest is *contoso.Virtual-Vacuum-virtual-camera.1.4.importmanifest.json*. The child import manifest with details for updating the camera is *Contoso.Virtual-Vacuum.3.3.importmanifest.json*. Note that both manifest file names follow the required format and end with *.importmanifest.json*.
+ In the example, the parent import manifest is *contoso.Virtual-Vacuum-virtual-camera.1.4.importmanifest.json*. The child import manifest with details for updating the camera is *Contoso.Virtual-Vacuum.3.3.importmanifest.json*. Both manifest file names follow the required format and end with *.importmanifest.json*.
:::image type="content" source="media/understand-device-update/two-containers.png" alt-text="Screenshot that shows sample files uploaded to update cameras connected to a smart vacuum cleaner device." lightbox="media/understand-device-update/two-containers.png":::
If you haven't already done so, create a [Device Update account and instance](cr
:::image type="content" source="media/understand-device-update/three-confirm-import.png" alt-text="Screenshot that shows listed files and the button for importing an update." lightbox="media/understand-device-update/three-confirm-import.png"::: 10. The import process begins, and the screen changes to the **Import History** section. Select **Refresh** to view progress until the import process finishes. Depending on the size of the update, the import might finish in a few minutes or take longer.+ 11. When the **Status** column indicates that the import has succeeded, select the **Available Updates** tab. You should see your imported update in the list now. :::image type="content" source="media/understand-device-update/four-update-added.png" alt-text="Screenshot that shows the imported update added to the list." lightbox="media/understand-device-update/four-update-added.png":::
-[Learn more](import-update.md) about importing updates.
+For more information about the import process, see [Import an update to Device Update](import-update.md).
-## Create update group
+## View device groups
-1. Go to the Groups and Deployments tab at the top of the page.
- :::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot of ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
+Device Update uses groups to organize devices. Device Update automatically sorts devices into groups based on their assigned tags and compatibility properties. Each device belongs to only one group, but groups can have multiple subgroups to sort different device classes.
-2. Select the "Add group" button to create a new group.
- :::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot of device group addition." lightbox="media/create-update-group/add-group.png":::
+1. Go to the **Groups and Deployments** tab at the top of the page.
-3. Select an IoT Hub tag and Device Class from the list and then select Create group.
- :::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot of tag selection." lightbox="media/create-update-group/select-tag.png":::
+ :::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot that shows ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
-4. Once the group is created, you will see that the update compliance chart and groups list are updated. Update compliance chart shows the count of devices in various states of compliance: On latest update, New updates available, and Updates in Progress. [Learn about update compliance.](device-update-compliance.md)
- :::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot of update compliance view." lightbox="media/create-update-group/updated-view.png":::
+1. View the list of groups and the update compliance chart. The update compliance chart shows the count of devices in various states of compliance: **On latest update**, **New updates available**, and **Updates in progress**. [Learn about update compliance](device-update-compliance.md).
-5. You should see your newly created group and any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they will show up in a corresponding invalid group. You can deploy the best available update to the new user-defined group from this view by clicking on the "Deploy" button next to the group.
+ :::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot that shows the update compliance view." lightbox="media/create-update-group/updated-view.png":::
-[Learn more](create-update-group.md) about adding tags and creating update groups
+1. You should see a device group that contains the simulated device you set up in this tutorial along with any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they'll show up in a corresponding invalid group. To deploy the best available update to the new user-defined group from this view, select **Deploy** next to the group.
+For more information about tags and groups, see [Manage device groups](create-update-group.md).
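+
+Group membership is driven by the optional ADUGroup tag in the device (or module) twin. As a hedged sketch, you can assign that tag from the Azure CLI before refreshing this view; the hub, device, and group names are placeholders:
+
+```azurecli
+# Assign the ADUGroup tag that Device Update uses to group this device
+az iot hub device-twin update \
+ --hub-name <IoT Hub name> \
+ --device-id <device ID> \
+ --tags '{"ADUGroup": "<group name>"}'
+```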
## Deploy update
-1. Once the group is created, you should see a new update available for your device group, with a link to the update under Best Update (you may need to Refresh once). [Learn More about update compliance.](device-update-compliance.md)
+1. Once the group is created, you should see a new update available for your device group, with a link to the update under Best Update (you might need to select Refresh once).
+
+ For more information about compliance, see [Device Update compliance](device-update-compliance.md).
-2. Select the target group by clicking on the group name. You will be directed to the group details under Group basics.
+1. Select the target group by clicking on the group name. You'll be directed to the group details under Group basics.
- :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Group details" lightbox="media/deploy-update/group-basics.png":::
+ :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Group details" lightbox="media/deploy-update/group-basics.png":::
-3. To initiate the deployment, go to the Current deployment tab. Click the deploy link next to the desired update from the Available updates section. The best, available update for a given group will be denoted with a "Best" highlight.
+1. To initiate the deployment, go to the Current deployment tab. Select the deploy link next to the desired update from the Available updates section. The best available update for a given group is denoted with a "Best" highlight.
- :::image type="content" source="media/deploy-update/select-update.png" alt-text="Select update" lightbox="media/deploy-update/select-update.png":::
+ :::image type="content" source="media/deploy-update/select-update.png" alt-text="Select update" lightbox="media/deploy-update/select-update.png":::
-4. Schedule your deployment to start immediately or in the future, then select Create.
+1. Schedule your deployment to start immediately or in the future, then select Create.
- :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Create deployment" lightbox="media/deploy-update/create-deployment.png":::
+ :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Create deployment" lightbox="media/deploy-update/create-deployment.png":::
-5. The Status under Deployment details should turn to Active, and the deployed update should be marked with "(deploying)".
+1. The Status under Deployment details should turn to Active, and the deployed update should be marked with "(deploying)".
- :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Deployment active" lightbox="media/deploy-update/deployment-active.png":::
+ :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Deployment active" lightbox="media/deploy-update/deployment-active.png":::
-6. View the compliance chart. You should see the update is now in progress.
+1. View the compliance chart. You should see the update is now in progress.
-7. After your device is successfully updated, you should see your compliance chart and deployment details update to reflect the same.
+1. After your device is successfully updated, you should see the compliance chart and deployment details update to reflect the new status.
:::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Update succeeded" lightbox="media/deploy-update/update-succeeded.png":::
If you haven't already done so, create a [Device Update account and instance](cr
3. Select Refresh to view the latest status details.
-You've now completed a successful end-to-end proxy update by using Device Update for IoT Hub.
+You've now completed a successful end-to-end proxy update by using Device Update for IoT Hub.
## Clean up resources
-When you no longer need them, clean up your Device Update account, instance, IoT hub, and IoT device.
+When you no longer need them, clean up your Device Update account, instance, IoT hub, and IoT device.
## Next steps
-You can use the following tutorials for a simple demonstration of Device Update for IoT Hub:
--- [Device Update for Azure IoT Hub tutorial using the Raspberry Pi 3 B+ reference image](device-update-raspberry-pi.md) (extensible via open source to build your own images for other architectures as needed)
-
-- [Device Update for Azure IoT Hub tutorial using the package agent on Ubuntu Server 18.04 x64](device-update-ubuntu-agent.md)
-
-- [Device Update for Azure IoT Hub tutorial using the Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md)--- [Device Update for Azure IoT Hub tutorial using the Azure real-time operating system](device-update-azure-real-time-operating-system.md)
+> [!div class="nextstepaction"]
+> [Learn more about proxy updates and multi-component updating](device-update-proxy-updates.md)
iot-hub-device-update Device Update Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-limits.md
Title: Understand Device Update for IoT Hub limits | Microsoft Docs
description: Key limits for Device Update for IoT Hub. Previously updated : 9/9/2022 Last updated : 9/23/2022
This document provides an overview of the various limits that are imposed on the Device Update for IoT Hub resource and its associated operations. It also indicates whether the limits are adjustable by contacting Microsoft Support or not.
-## Preview limits
+## General Availability limits
-During preview, the Device Update for IoT Hub service is provided at no cost to customers. More restrictive limits are imposed during the service's preview offering. These limits are expected to change once the service is generally available.
+The following tables describe the limits for the Device Update for IoT Hub service for both the Standard and Free tiers.
[!INCLUDE [device-update-for-iot-hub-limits](../../includes/device-update-for-iot-hub-limits.md)]
iot-hub-device-update Device Update Log Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-log-collection.md
Title: Device Update for Azure IoT Hub log collection | Microsoft Docs
description: Device Update for IoT Hub enables remote collection of diagnostic logs from connected IoT devices. Previously updated : 06/23/2022 Last updated : 10/26/2022
Learn how to initiate a Device Update for IoT Hub log operation and view collect
## Prerequisites
-# [Azure portal](#tab/portal)
- * [Access to an IoT Hub with Device Update for IoT Hub enabled](create-device-update-account.md). * An IoT device (or simulator) [provisioned for Device Update](device-update-agent-provisioning.md) within IoT Hub and implementing the Diagnostic Interface. * An [Azure Blob storage account](../storage/common/storage-account-create.md) under the same subscription as your Device Update for IoT Hub account.
Learn how to initiate a Device Update for IoT Hub log operation and view collect
> [!NOTE] > The remote log collection feature is currently compatible only with devices that implement the Diagnostic Interface and are able to upload files to Azure Blob storage. The reference agent implementation also expects the device to write log files to a user-specified file path on the device.
-# [Azure CLI](#tab/cli)
+# [Azure portal](#tab/portal)
-* [Access to an IoT Hub with Device Update for IoT Hub enabled](create-device-update-account.md).
+Supported browsers:
-* An IoT device (or simulator) [provisioned for Device Update](device-update-agent-provisioning.md) within IoT Hub and implementing the Diagnostic Interface.
+* [Microsoft Edge](https://www.microsoft.com/edge)
+* Google Chrome
-* An [Azure Blob storage account](../storage/common/storage-account-create.md) under the same subscription as your Device Update for IoT Hub account.
+# [Azure CLI](#tab/cli)
-* An Azure CLI environment:
+An Azure CLI environment:
- * Use the Bash environment in [Azure Cloud Shell](../cloud-shell/quickstart.md).
+* Use the Bash environment in [Azure Cloud Shell](../cloud-shell/quickstart.md).
- [![Launch Cloud Shell in a new window](../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
+ [![Launch Cloud Shell in a new window](../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
- * Or, if you prefer to run CLI reference commands locally, [install the Azure CLI](/cli/azure/install-azure-cli)
+* Or, if you prefer to run CLI reference commands locally, [install the Azure CLI](/cli/azure/install-azure-cli)
- * Sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command.
- * Run [az version](/cli/azure/reference-index#az-version) to find the version and dependent libraries that are installed. To upgrade to the latest version, run [az upgrade](/cli/azure/reference-index#az-upgrade).
- * When prompted, install Azure CLI extensions on first use. The commands in this article use the **azure-iot** extension. Run `az extension update --name azure-iot` to make sure you're using the latest version of the extension.
+ * Sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command.
+ * Run [az version](/cli/azure/reference-index#az-version) to find the version and dependent libraries that are installed. To upgrade to the latest version, run [az upgrade](/cli/azure/reference-index#az-upgrade).
+ * When prompted, install Azure CLI extensions on first use. The commands in this article use the **azure-iot** extension. Run `az extension update --name azure-iot` to make sure you're using the latest version of the extension.
-> [!NOTE]
-> The remote log collection feature is currently compatible only with devices that implement the Diagnostic Interface and are able to upload files to Azure Blob storage. The reference agent implementation also expects the device to write log files to a user-specified file path on the device.
+>[!TIP]
+>The Azure CLI commands in this article use the backslash `\` character for line continuation so that the command arguments are easier to read. This syntax works in Bash environments. If you're running these commands in PowerShell, replace each backslash with a backtick `\``, or remove them entirely.
In order to use the remote log collection feature, you must first link an Azure
# [Azure CLI](#tab/cli)
-Use the [az iot device-update instance create](/cli/azure/iot/device-update/instance#az-iot-device-update-instance-create) command to configure diagnostics for your Device Update instance.
+Use the [az iot du instance create](/cli/azure/iot/du/instance#az-iot-du-instance-create) command to configure diagnostics for your Device Update instance.
>[!TIP]
->You can use the `az iot device-update instance create` command on an existing Device Update instances and it will configure the instance with the updated parameters.
+>You can use the `az iot du instance create` command on an existing Device Update instance, and it will configure the instance with the updated parameters.
Replace the following placeholders with your own information:
Replace the following placeholders with your own information:
* *\<storage_id>*: The resource ID of the storage account where the diagnostics logs will be stored. You can retrieve the resource ID by using the [az storage show](/cli/azure/storage/account#az-storage-account-show) command and querying for the ID value: `az storage account show -n <storage_name> --query id`. ```azurecli-interactive
-az iot device-update instance update --account <account_name> --instance <instance_name> --set enableDiagnostics=true diagnosticStorageProperties.resourceId=<storage_id>
+az iot du instance update --account <account_name> --instance <instance_name> --set enableDiagnostics=true diagnosticStorageProperties.resourceId=<storage_id>
```
The relevant parameter "maxKilobytesToUploadPerLogPath" will apply to each logCo
Log operations are a service-driven action that you can instruct your IoT devices to perform through the Device Update service. For a more detailed explanation of how log operations function, see [Device update diagnostics](device-update-diagnostics.md).
+# [Azure portal](#tab/portal)
+ 1. Navigate to your IoT Hub and select the **Updates** tab under the **Device Management** section of the navigation pane. 2. Select the **Diagnostics** tab in the UI. If you don't see a Diagnostics tab, make sure you're using the newest version of the Device Update for IoT Hub user interface. If you see "Diagnostics must be enabled for this Device Update instance," make sure you've linked an Azure Blob storage account with your Device Update instance.
Log operations are a service-driven action that you can instruct your IoT device
8. In the log operation details, you can view the device-specific status and see the log location path. This path corresponds to the virtual directory path within your Azure Blob storage account where the diagnostic logs have been uploaded.
+# [Azure CLI](#tab/cli)
+
+Use the [az iot du device log collect](/cli/azure/iot/du/device/log#az-iot-du-device-log-collect) command to configure a diagnostics log collection operation.
+
+The `device log collect` command takes the following arguments:
+
+* `--account`: The Device Update account name.
+* `--instance`: The Device Update instance name.
+* `--log-collection-id`: A name for the log collection operation.
+* `--agent-id`: Key=value pairs that identify a target Device Update agent for this log collection operation. Use `deviceId=<device name>` if the agent has a device identity. Use `deviceId=<device name> moduleId=<module name>` if the agent has a module identity. You can use the `--agent-id` parameter multiple times to target multiple devices.
+
+For example:
+
+```azurecli
+az iot du device log collect \
+ --account <Device Update account name> \
+ --instance <Device Update instance name> \
+ --log-collection-id <log collection name> \
+ --agent-id deviceId=<device name> \
+ --agent-id deviceId=<device name> moduleId=<module name>
+```
+
+Use [az iot du device log show](/cli/azure/iot/du/device/log#az-iot-du-device-log-show) to view the details of a specific diagnostic log collection operation.
+
+```azurecli
+az iot du device log show \
+ --account <Device Update account name> \
+ --instance <Device Update instance name> \
+ --log-collection-id <log collection name>
+```
+++ ## View and export collected diagnostic logs 1. Once your log operation has succeeded, navigate to your Azure Blob storage account.
Log operations are a service-driven action that you can instruct your IoT device
4. Use the log location path from the log operation details to navigate to the correct directory containing the logs. By default, the remote log collection feature instructs targeted devices to upload diagnostic logs using the following directory path model: **Blob storage container/Target device ID/Log operation ID/On-device log path**
-5. If you haven't modified the diagnostic component of the DU agent, the device will respond to any log operation by attempting to upload two plaintext log files: the DU agent diagnostic log ("aduc.log"), and the DO agent diagnostic log ("do-agent.log"). You can learn more about which log files the DU reference agent collects by reading the [Device update diagnostics](device-update-diagnostics.md) concept page.
+5. If you haven't modified the diagnostic component of the Device Update agent, the device will respond to any log operation by attempting to upload two plaintext log files: the Device Update agent diagnostic log ("aduc.log"), and the DO agent diagnostic log ("do-agent.log"). You can learn more about which log files the Device Update reference agent collects by reading the [Device Update diagnostics](device-update-diagnostics.md) concept page.
6. You can view the log file's contents by selecting the file name, then selecting the menu element (ellipsis) and clicking **View/edit**. You can also download or delete the log file by selecting the respectively labeled options.
iot-hub-device-update Device Update Multi Step Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-multi-step-updates.md
An example update manifest with one inline step:
"isDeployable": true, "compatibility": [ {
- "deviceManufacturer": "du-device",
- "deviceModel": "e2e-test"
+ "manufacturer": "du-device",
+ "model": "e2e-test"
} ], "instructions": {
An example update manifest with two inline steps:
"isDeployable": true, "compatibility": [ {
- "deviceManufacturer": "du-device",
- "deviceModel": "e2e-test"
+ "manufacturer": "du-device",
+ "model": "e2e-test"
} ], "instructions": {
An example update manifest with one reference step:
"isDeployable": true, "compatibility": [ {
- "deviceManufacturer": "du-device",
- "deviceModel": "e2e-test"
+ "manufacturer": "du-device",
+ "model": "e2e-test"
} ], "instructions": {
iot-hub-device-update Device Update Plug And Play https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-plug-and-play.md
# Device Update for IoT Hub and IoT Plug and Play
-Device Update for IoT Hub uses [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) to discover and manage devices that are over-the-air update capable. The Device Update service sends and receives properties and messages to and from devices using IoT Plug and Play interfaces. Device Update for IoT Hub requires IoT devices to implement the following interfaces and model ID.
+Device Update for IoT Hub uses [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) to discover and manage devices that are over-the-air update capable. The Device Update service sends and receives properties and messages to and from devices using IoT Plug and Play interfaces.
For more information: * Understand the [IoT Plug and Play device client](../iot-develop/concepts-developer-guide-device.md). * See how the [Device Update agent is implemented](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/how-to-build-agent-code.md).
-## Device Update core interface
+## Device Update models
-The **DeviceUpdateCore** interface is used to send update actions and metadata to devices and receive update status from devices. The DeviceUpdateCore interface is split into two object properties.
+Model ID is how smart devices advertise their capabilities to Azure IoT applications with IoT Plug and Play. To learn more about how to build smart devices that advertise their capabilities to Azure IoT applications, visit the [IoT Plug and Play device developer guide](../iot-develop/concepts-developer-guide-device.md).
+
+Device Update for IoT Hub requires the IoT Plug and Play smart device to announce a model ID as part of the device connection. [Learn how to announce a model ID](../iot-develop/concepts-developer-guide-device.md#model-id-announcement).
+
+Device Update has two PnP models defined that support Device Update features. The Device Update contract model, '**dtmi:azure:iot:deviceUpdateContractModel;2**', supports the core functionality and uses the Device Update core interface to send update actions and metadata to devices and receive update status from devices.
-The expected component name in your model is **"deviceUpdate"** when this interface is implemented. [Learn more about Azure IoT Plug and Play components.](../iot-develop/concepts-modeling-guide.md)
+The other supported model is **dtmi:azure:iot:deviceUpdateModel;2**, which extends **deviceUpdateContractModel;2** and also uses other PnP interfaces to send device properties and information and to enable diagnostic features. Learn more about the [Device Update models and interface versions](https://github.com/Azure/iot-plugandplay-models/tree/main/dtmi/azure/iot).
+
+The Device Update agent uses **dtmi:azure:iot:deviceUpdateModel;2**, which supports all the latest features in the [1.0.0 release](understand-device-update.md#flexible-features-for-updating-devices). This model supports the [V5 manifest version](import-concepts.md).
### Agent metadata
The **deviceProperties** field contains the manufacturer and model information f
|-|||--| |manufacturer|string|device to cloud|The device manufacturer of the device, reported through `deviceProperties`. This property is read from one of two places - first, the DeviceUpdateCore interface attempts to read the 'aduc_manufacturer' value from the [Configuration file](device-update-configuration-file.md). If the value isn't populated in the configuration file, it defaults to reporting the compile-time definition for ADUC_DEVICEPROPERTIES_MANUFACTURER. This property is reported only at boot time. <br><br> Default value: 'Contoso'.| |model|string|device to cloud|The device model of the device, reported through `deviceProperties`. This property is read from one of two places - first, the DeviceUpdateCore interface attempts to read the 'aduc_model' value from the [Configuration file](device-update-configuration-file.md). If the value isn't populated in the configuration file, it defaults to reporting the compile-time definition for ADUC_DEVICEPROPERTIES_MODEL. This property is reported only at boot time. <br><br> Default value: 'Video'|
-|interfaceId|string|device to cloud|This property is used by the service to identify the interface version being used by the Device Update agent. The interface ID is required by Device Update service to manage and communicate with the agent. <br><br> Default value: 'dtmi:azure:iot:deviceUpdate;1' for devices using DU agent version 0.8.0.|
+|contractModelId|string|device to cloud|This property is used by the service to identify the base model version that the Device Update agent uses, so the service can manage and communicate with the agent.<br>Value: 'dtmi:azure:iot:deviceUpdateContractModel;2' for devices using DU agent version 1.0.0. <br>**Note:** Agents using the 'dtmi:azure:iot:deviceUpdateModel;2' model must report the contractModelId as 'dtmi:azure:iot:deviceUpdateContractModel;2', because deviceUpdateModel;2 is extended from deviceUpdateContractModel;2.|
|aduVer|string|device to cloud|Version of the Device Update agent running on the device. This value is read from the build only if ENABLE_ADU_TELEMETRY_REPORTING is set to 1 (true) during compile time. Customers can choose to opt out of version reporting by setting the value to 0 (false). [How to customize Device Update agent properties](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/how-to-build-agent-code.md).| |doVer|string|device to cloud|Version of the Delivery Optimization agent running on the device. The value is read from the build only if ENABLE_ADU_TELEMETRY_REPORTING is set to 1 (true) during compile time. Customers can choose to opt out of the version reporting by setting the value to 0 (false). [How to customize Delivery Optimization agent properties](https://github.com/microsoft/do-client/blob/main/README.md#building-do-client-components).| |Custom compatibility Properties|User Defined|device to cloud|Implementer can define other device properties to be used for the compatibility check while targeting the update deployment.|
IoT Hub device twin example:
"deviceProperties": { "manufacturer": "contoso", "model": "virtual-vacuum-v1",
- "interfaceId": "dtmi:azure:iot:deviceUpdateModel;1",
+ "contractModelId": "dtmi:azure:iot:deviceUpdateContractModel;2",
"aduVer": "DU;agent/0.8.0-rc1-public-preview", "doVer": "DU;lib/v0.6.0+20211001.174458.c8c4051,DU;agent/v0.6.0+20211001.174418.c8c4051" },
The expected component name in your model is **deviceInformation** when this int
|totalStorage|Property|string|device to cloud|Total available storage on the device in kilobytes.|2048| |totalMemory|Property|string|device to cloud|Total available memory on the device in kilobytes.|256|
-## Model ID
+## Next steps
-Model ID is how smart devices advertise their capabilities to Azure IoT applications with IoT Plug and Play.To learn more on how to build smart devices to advertise their capabilities to Azure IoT applications visit [IoT Plug and Play device developer guide](../iot-develop/concepts-developer-guide-device.md).
+* [Understand Device Update agent configuration file](device-update-configuration-file.md)
-Device Update for IoT Hub requires the IoT Plug and Play smart device to announce a model ID with a value of **"dtmi:azure:iot:deviceUpdateModel;1"** as part of the device connection. [Learn how to announce a model ID](../iot-develop/concepts-developer-guide-device.md#model-id-announcement).
+* [Understand Device Update for Azure IoT Hub Agent](device-update-agent-overview.md)
iot-hub-device-update Device Update Raspberry Pi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-raspberry-pi.md
This tutorial walks you through the steps to complete an end-to-end image-based
In this tutorial, you'll learn how to: > [!div class="checklist"]
+>
> * Download an image. > * Add a tag to your IoT device. > * Import an update.
-> * Create a device group.
> * Deploy an image update. > * Monitor the update deployment.
Use your favorite OS flashing tool to install the Device Update base image (adu-
Device Update for Azure IoT Hub software is subject to the following license terms:
- * [Device update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE)
- * [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE)
+* [Device update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE)
+* [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE)
Read the license terms prior to using the agent. Your installation and use constitutes your acceptance of these terms. If you don't agree with the license terms, don't use the Device Update for IoT Hub agent.
Read the license terms prior to using the agent. Your installation and use const
Now, add the device to IoT Hub. From within IoT Hub, a connection string is generated for the device.
-1. From the Azure portal, start IoT Hub.
-1. Create a new device.
+1. From the [Azure portal](https://portal.azure.com), navigate to your IoT hub.
1. On the left pane, select **Devices**. Then select **New**. 1. Under **Device ID**, enter a name for the device. Ensure that the **Autogenerate keys** checkbox is selected. 1. Select **Save**. On the **Devices** page, the device you created should be in the list. 1. Get the device connection string by using one of two options:
- - Option 1: Use the Device Update agent with a module identity: On the same **Devices** page, select **Add Module Identity** at the top. Create a new Device Update module with the name **IoTHubDeviceUpdate**. Choose other options as they apply to your use case and then select **Save**. Select the newly created module. In the module view, select the **Copy** icon next to **Primary Connection String**.
- - Option 2: Use the Device Update agent with the device identity: In the device view, select the **Copy** icon next to **Primary Connection String**.
+ * Option 1: Use the Device Update agent with a module identity: On the same **Devices** page, select **Add Module Identity** at the top. Create a new Device Update module with the name **IoTHubDeviceUpdate**. Choose other options as they apply to your use case and then select **Save**. Select the newly created module. In the module view, select the **Copy** icon next to **Primary Connection String**.
+ * Option 2: Use the Device Update agent with the device identity: In the device view, select the **Copy** icon next to **Primary Connection String**.
1. Paste the copied characters somewhere for later use in the following steps: **This copied string is your device connection string**.
+## Add a tag to your device
+
+1. In the Azure portal, navigate to your IoT hub.
+1. On the left pane, under **Devices**, find your IoT device and go to the device twin or module twin.
+1. In the module twin of the Device Update agent module, delete any existing Device Update tag values by setting them to null. If you're using the device identity with the Device Update agent, make these changes on the device twin.
+1. Add a new Device Update tag value, as shown:
+
+ ```JSON
+ "tags": {
+ "ADUGroup": "<CustomTagValue>"
+ }
+ ```
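+
+If you prefer to script this step, you can set the tag with the Azure CLI instead of the portal. This sketch assumes the azure-iot CLI extension is installed and that the Device Update agent uses the device identity; for a module identity, use `az iot hub module-twin update` with `--module-id` instead:
+
+```azurecli
+az iot hub device-twin update \
+  --hub-name <IoT Hub name> \
+  --device-id <device name> \
+  --tags '{"ADUGroup": "<CustomTagValue>"}'
+```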
+ ## Prepare on-device configurations for Device Update for IoT Hub Two configuration files must be on the device so that Device Update for IoT Hub configures properly. The first file is the `du-config.json` file, which must exist at `/adu/du-config.json`. The second file is the `du-diagnostics-config.json` file, which must exist at `/adu/du-diagnostics-config.json`.
Here are two examples for the `du-config.json` and the `du-diagnostics-config.js
### Example du-config.json ```JSON
- {
- "schemaVersion": "1.0",
- "aduShellTrustedUsers": [
- "adu",
- "do"
- ],
+{
+ "schemaVersion": "1.0",
+ "aduShellTrustedUsers": [
+ "adu",
+ "do"
+ ],
+ "manufacturer": "fabrikam",
+ "model": "vacuum",
+ "agents": [
+ {
+ "name": "main",
+ "runas": "adu",
+ "connectionSource": {
+ "connectionType": "string",
+ "connectionData": "HostName=example-connection-string.azure-devices.net;DeviceId=example-device;SharedAccessKey=M5oK/rOP12aB5678YMWv5vFWHFGJFwE8YU6u0uTnrmU="
+ },
"manufacturer": "fabrikam",
- "model": "vacuum",
- "agents": [
- {
- "name": "main",
- "runas": "adu",
- "connectionSource": {
- "connectionType": "string",
- "connectionData": "HostName=example-connection-string.azure-devices.net;DeviceId=example-device;SharedAccessKey=M5oK/rOP12aB5678YMWv5vFWHFGJFwE8YU6u0uTnrmU="
- },
- "manufacturer": "fabrikam",
- "model": "vacuum"
- }
- ]
- }
+ "model": "vacuum"
+ }
+ ]
+}
``` ### Example du-diagnostics-config.json ```JSON
- {
- "logComponents":[
- {
- "componentName":"adu",
- "logPath":"/adu/logs/"
- },
- {
- "componentName":"do",
- "logPath":"/var/log/deliveryoptimization-agent/"
- }
- ],
- "maxKilobytesToUploadPerLogPath":50
- }
+{
+ "logComponents":[
+ {
+ "componentName":"adu",
+ "logPath":"/adu/logs/"
+ },
+ {
+ "componentName":"do",
+ "logPath":"/var/log/deliveryoptimization-agent/"
+ }
+ ],
+ "maxKilobytesToUploadPerLogPath":50
+}
``` ## Configure the Device Update agent on Raspberry Pi
Here are two examples for the `du-config.json` and the `du-diagnostics-config.js
1. Follow these instructions to add the configuration details: 1. First, SSH in to the machine by using the following command in the PowerShell window:
-
- ```shell
- ssh raspberrypi3 -l root
- ```
+
+ ```shell
+ ssh raspberrypi3 -l root
+ ```
1. Create or open the `du-config.json` file for editing by using:
-
- ```bash
- nano /adu/du-config.json
- ```
+
+ ```bash
+ nano /adu/du-config.json
+ ```
1. After you run the command, you should see an open editor with the file. If you've never created the file, it will be empty. Now copy the preceding example du-config.json contents, and substitute the configurations required for your device. Then replace the example connection string with the one for the device you created in the preceding steps.
- 1. After you finish your changes, select **Ctrl+X** to exit the editor. Then enter **y** to save the changes.
+ 1. After you finish your changes, select `Ctrl+X` to exit the editor. Then enter `y` to save the changes.
1. Now you need to create the `du-diagnostics-config.json` file by using similar commands. Start by creating or opening the `du-diagnostics-config.json` file for editing by using:
-
- ```bash
- nano /adu/du-diagnostics-config.json
- ```
- 1. Copy the preceding example du-diagnostics-config.json contents, and substitute any configurations that differ from the default build. The example du-diagnostics-config.json file represents the default log locations for Device Update for IoT Hub. You only need to change these if your implementation differs.
- 1. After you finish your changes, select **Ctrl+X** to exit the editor. Then enter **y** to save the changes.
+ ```bash
+ nano /adu/du-diagnostics-config.json
+ ```
+
+ 1. Copy the preceding example du-diagnostics-config.json contents, and substitute any configurations that differ from the default build. The example du-diagnostics-config.json file represents the default log locations for Device Update for IoT Hub. You only need to change these default values if your implementation differs.
+ 1. After you finish your changes, select `Ctrl+X` to exit the editor. Then enter `y` to save the changes.
 1. Use the following command to show the files located in the `/adu/` directory. You should see both of your configuration files.
- ```bash
- ls -la /adu/
- ```
+ ```bash
+ ls -la /adu/
+ ```
1. Restart the Device Update system daemon to make sure that the configurations were applied. Use the following command within the terminal logged in to the `raspberrypi`:
-
- ```markdown
- systemctl start adu-agent
- ```
+
+ ```bash
+ systemctl start adu-agent
+ ```
1. Check that the agent is live by using the following command:
- ```markdown
- systemctl status adu-agent
- ```
+ ```bash
+ systemctl status adu-agent
+ ```
- You should see the status come back as alive and green.
+ You should see the status appear as alive and green.
## Connect the device in Device Update for IoT Hub
Here are two examples for the `du-config.json` and the `du-diagnostics-config.js
1. Select the link with your device name. 1. At the top of the page, select **Device Twin** if you're connecting directly to Device Update by using the IoT device identity. Otherwise, select the module you created and select its module twin. 1. Under the **reported** section of the **Device Twin** properties, look for the Linux kernel version.
-For a new device, which hasn't received an update from Device Update, the
-[DeviceManagement:DeviceInformation:1.swVersion](device-update-plug-and-play.md) value represents
-the firmware version running on the device. After an update has been applied to a device, Device Update
-uses the [AzureDeviceUpdateCore:ClientMetadata:4.installedUpdateId](device-update-plug-and-play.md) property
-value to represent the firmware version running on the device.
+
+ For a new device, which hasn't received an update from Device Update, the [DeviceManagement:DeviceInformation:1.swVersion](device-update-plug-and-play.md) value represents the firmware version running on the device. After an update has been applied to a device, Device Update uses the [AzureDeviceUpdateCore:ClientMetadata:4.installedUpdateId](device-update-plug-and-play.md) property value to represent the firmware version running on the device.
+ 1. The base and update image files have a version number in the file name. ```markdown adu-<image type>-image-<machine>-<version number>.<extension> ```
-Use that version number in the later "Import the update" section.
-
-## Add a tag to your device
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and go to the IoT hub.
-1. On the left pane, under **Devices**, find your IoT device and go to the device twin or module twin.
-1. In the module twin of the Device Update agent module, delete any existing Device Update tag values by setting them to null. If you're using the device identity with the Device Update agent, make these changes on the device twin.
-1. Add a new Device Update tag value, as shown:
-
- ```JSON
- "tags": {
- "ADUGroup": "<CustomTagValue>"
- }
- ```
+ Use that version number in the later "Import the update" section.
## Import the update
Use that version number in the later "Import the update" section.
> [!NOTE] > We recommend that you use a new container each time you import an update to avoid accidentally importing files from previous updates. If you don't use a new container, be sure to delete any files from the existing container before you finish this step.
-
+ :::image type="content" source="media/import-update/storage-account-ppr.png" alt-text="Screenshot that shows Storage accounts and Containers." lightbox="media/import-update/storage-account-ppr.png"::: 1. In your container, select **Upload** and go to the files you downloaded in step 1. After you've selected all your update files, select **Upload**. Then select the **Select** button to return to the **Import update** page.
Use that version number in the later "Import the update" section.
:::image type="content" source="media/import-update/update-ready-ppr.png" alt-text="Screenshot that shows job status." lightbox="media/import-update/update-ready-ppr.png":::
-[Learn more](import-update.md) about how to import updates.
+For more information about the import process, see [Import an update to Device Update](import-update.md).
-## Create an update group
+## View device groups
+
+Device Update uses groups to organize devices. Device Update automatically sorts devices into groups based on their assigned tags and compatibility properties. Each device belongs to only one group, but groups can have multiple subgroups to sort different device classes.
1. Go to the **Groups and Deployments** tab at the top of the page. :::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot that shows ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
-1. Select **Add group** to create a new group.
-
- :::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot that shows device group addition." lightbox="media/create-update-group/add-group.png":::
-
-1. Select an **IoT Hub** tag and **Device class** from the list. Then select **Create group**.
-
- :::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot that shows tag selection." lightbox="media/create-update-group/select-tag.png":::
-
-1. After the group is created, the update compliance chart and groups list are updated. The update compliance chart shows the count of devices in various states of compliance: **On latest update**, **New updates available**, and **Updates in progress**. [Learn about update compliance](device-update-compliance.md).
+1. View the list of groups and the update compliance chart. The update compliance chart shows the count of devices in various states of compliance: **On latest update**, **New updates available**, and **Updates in progress**. [Learn about update compliance](device-update-compliance.md).
:::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot that shows the update compliance view." lightbox="media/create-update-group/updated-view.png":::
-1. You should see your newly created group and any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they show up in a corresponding invalid group. To deploy the best available update to the new user-defined group from this view, select **Deploy** next to the group.
+1. You should see a device group that contains the device you set up in this tutorial, along with any available updates for the devices in the group. If there are devices that don't meet the device class requirements of the group, they'll show up in a corresponding invalid group. To deploy the best available update to the new user-defined group from this view, select **Deploy** next to the group.
-[Learn more](create-update-group.md) about how to add tags and create update groups.
+For more information about tags and groups, see [Manage device groups](create-update-group.md).
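+
+If you want to check your groups from the command line as well, the Azure CLI has a device group command group. The following is a sketch that assumes the `az iot du device group list` command from the azure-iot extension:
+
+```azurecli
+az iot du device group list \
+  --account <Device Update account name> \
+  --instance <Device Update instance name>
+```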
## Deploy the update
-1. After the group is created, you should see a new update available for your device group. A link to the update should be under **Best update**. You might need to refresh once. [Learn more about update compliance](device-update-compliance.md).
+1. After the group is created, you should see a new update available for your device group. A link to the update should be under **Best update**. You might need to refresh once.
+
+ For more information about compliance, see [Device Update compliance](device-update-compliance.md).
+ 1. Select the target group by selecting the group name. You're directed to the group details under **Group basics**. :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Screenshot that shows Group details." lightbox="media/deploy-update/group-basics.png":::
Use that version number in the later "Import the update" section.
:::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Screenshot that shows Deployment active." lightbox="media/deploy-update/deployment-active.png"::: 1. View the compliance chart to see that the update is now in progress.
-1. After your device is successfully updated, you see that your compliance chart and deployment details updated to reflect the same.
+1. After your device is successfully updated, you see that your compliance chart and deployment details are updated to reflect the same.
:::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Screenshot that shows Update succeeded." lightbox="media/deploy-update/update-succeeded.png":::
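Deployments can also be created from the command line instead of the portal. The following Azure CLI sketch assumes the `az iot du device deployment create` command and its group, deployment, and update identity parameters; the provider, name, and version must match the update you imported:

```azurecli
az iot du device deployment create \
  --account <Device Update account name> \
  --instance <Device Update instance name> \
  --group-id <device group name> \
  --deployment-id <deployment name> \
  --update-provider <update provider> \
  --update-name <update name> \
  --update-version <update version>
```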
When no longer needed, clean up your Device Update account, instance, IoT hub, a
## Next steps > [!div class="nextstepaction"]
-> [Simulator reference agent](device-update-simulator.md)
+> [Update device packages with Device Update](device-update-ubuntu-agent.md)
iot-hub-device-update Device Update Simulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-simulator.md
This tutorial walks you through the steps to complete an end-to-end image-based
In this tutorial, you'll learn how to: > [!div class="checklist"]
+>
> * Download and install an image. > * Add a tag to your IoT device. > * Import an update.
-> * Create a device group.
> * Deploy an image update. > * Monitor the update deployment. ## Prerequisites
-If you haven't already done so, create a [Device Update account and instance](create-device-update-account.md) and configure an IoT hub.
+* Create a [Device Update account and instance](create-device-update-account.md) configured with an IoT hub.
-Download the zip file named `Tutorial_Simulator.zip` from [Release Assets](https://github.com/Azure/iot-hub-device-update/releases) in the latest release, and unzip it.
+* Have an Ubuntu 18.04 device. This device can be either physical or a virtual machine.
+
+* Download the zip file named `Tutorial_Simulator.zip` from [Release Assets](https://github.com/Azure/iot-hub-device-update/releases) in the latest release, and unzip it.
+
+ If your test device is different than your development machine, download the zip file onto both.
+
+ You can use `wget` to download the zip file. Replace `<release_version>` with the latest release, for example `0.8.2`.
+
+ ```bash
+ wget https://github.com/Azure/iot-hub-device-update/releases/download/<release_version>/Tutorial_Simulator.zip
+ ```
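+
+ After the download finishes, extract the archive. For example (install `unzip` first with `sudo apt-get install unzip` if it isn't already present):
+
+ ```bash
+ unzip Tutorial_Simulator.zip -d Tutorial_Simulator
+ ```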
## Add a device to Azure IoT Hub After the Device Update agent is running on an IoT device, you must add the device to IoT Hub. From within IoT Hub, a connection string is generated for a particular device.
-1. From the Azure portal, start the Device Update for IoT Hub.
+1. From the [Azure portal](https://portal.azure.com), navigate to your IoT hub.
1. Create a new device. 1. On the left pane, go to **Devices**. Then select **New**. 1. Under **Device ID**, enter a name for the device. Ensure that the **Autogenerate keys** checkbox is selected.
After the Device Update agent is running on an IoT device, you must add the devi
1. In the device view, select the **Copy** icon next to **Primary Connection String**. 1. Paste the copied characters somewhere for later use in the following steps:
- **This copied string is your device connection string**.
+ **This copied string is your device connection string**.
+
+## Add a tag to your device
+
+1. From the Azure portal, navigate to your IoT hub.
+1. From **Devices** on the left pane, find your IoT device and go to the device twin or module twin.
+1. In the module twin of the Device Update agent module, delete any existing Device Update tag values by setting them to null. If you're using the device identity with a Device Update agent, make these changes on the device twin.
+1. Add a new Device Update tag value, as shown:
+
+ ```JSON
+ "tags": {
+ "ADUGroup": "<CustomTagValue>"
+ }
+ ```
## Install a Device Update agent to test it as a simulator
-1. Follow the instructions to [install the Azure IoT Edge runtime](../iot-edge/how-to-provision-single-device-linux-symmetric.md?view=iotedge-2020-11&preserve-view=true).
+1. Follow the instructions to [install the Azure IoT Edge runtime](../iot-edge/how-to-provision-single-device-linux-symmetric.md?view=iotedge-2020-11&preserve-view=true) on your test device.
+ > [!NOTE] > The Device Update agent doesn't depend on IoT Edge. But it does rely on the IoT Identity Service daemon that's installed with IoT Edge (1.2.0 and higher) to obtain an identity and connect to IoT Hub. > > Although not covered in this tutorial, the [IoT Identity Service daemon can be installed standalone on Linux-based IoT devices](https://azure.github.io/iot-identity-service/installation.html). The sequence of installation matters. The Device Update package agent must be installed _after_ the IoT Identity Service. Otherwise, the package agent won't be registered as an authorized component to establish a connection to IoT Hub.
-1. Then, install the Device Update agent .deb packages.
+
+1. Install the Device Update agent .deb packages.
```bash
- sudo apt-get install deviceupdate-agent deliveryoptimization-plugin-apt
+ sudo apt-get install deviceupdate-agent
``` 1. Enter your IoT device's module (or device, depending on how you [provisioned the device with Device Update](device-update-agent-provisioning.md)) primary connection string in the configuration file by running the following command:
After the Device Update agent is running on an IoT device, you must add the devi
sudo nano /etc/adu/du-config.json ```
-1. Set up the agent to run as a simulator. Run the following command on the IoT device so that the Device Update agent invokes the simulator handler to process a package update with APT ('microsoft/apt:1'):
+1. Set up the agent to run as a simulator. Run the following command on the IoT device so that the Device Update agent invokes the simulator handler to process a package update with APT (`microsoft/apt:1`).
```sh sudo /usr/bin/AducIotAgent --register-content-handler /var/lib/adu/extensions/sources/libmicrosoft_simulator_1.so --update-type 'microsoft/apt:1' ```
-
+ To register and invoke the simulator handler, use the following format, filling in the placeholders:
-
+ `sudo /usr/bin/AducIotAgent --register-content-handler <full path to the handler file> --update-type <update type name>`
-1. You will need the file `sample-du-simulator-data.json` from the downloaded `Tutorial_Simulator.zip` in the prerequisites.
+1. Copy the file `sample-du-simulator-data.json`, which is in the `Tutorial_Simulator.zip` file that you downloaded in the prerequisites, to the `/tmp` folder.
- Open the file `sample-du-simulator-data.json` and copy contents to clipboard:
-
```sh
- nano sample-du-simulator-data.json
+ cp sample-du-simulator-data.json /tmp/du-simulator-data.json
```
-
- Select the contents of the file and press **Ctrl+C**. Press **Ctrl+X** to close the file and don't save changes.
-
- Run the following command to create and edit the `du-simulator-data.json` file in the tmp folder:
- ```sh
- sudo nano /tmp/du-simulator-data.json
- ```
- Press **Ctrl+V** to paste the contents into the editor. Select **Ctrl+X** to save the changes, and then **Y**.
-
- Change permissions:
+1. Change permissions for the new file.
+ ```sh sudo chown adu:adu /tmp/du-simulator-data.json sudo chmod 664 /tmp/du-simulator-data.json
After the Device Update agent is running on an IoT device, you must add the devi
 sudo chmod 1777 /tmp ```
-1. Restart the Device Update agent by running the following command:
+1. Restart the Device Update agent.
```bash sudo systemctl restart adu-agent
After the Device Update agent is running on an IoT device, you must add the devi
Device Update for Azure IoT Hub software is subject to the following license terms:
- * [Device Update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE)
- * [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE)
+* [Device Update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE)
+* [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE)
Read the license terms prior to using the agent. Your installation and use constitutes your acceptance of these terms. If you don't agree with the license terms, don't use the Device Update for IoT Hub agent. > [!NOTE] > After your testing with the simulator, run the following command to invoke the APT handler and [deploy over-the-air package updates](device-update-ubuntu-agent.md):
-```sh
-# sudo /usr/bin/AducIotAgent --register-content-handler /var/lib/adu/extensions/sources/libmicrosoft_apt_1.so --update-type 'microsoft/a pt:1'
-```
-
-## Add a tag to your device
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and go to the IoT hub.
-1. From **Devices** on the left pane, find your IoT device and go to the device twin or module twin.
-1. In the module twin of the Device Update agent module, delete any existing Device Update tag values by setting them to null. If you're using the device identity with a Device Update agent, make these changes on the device twin.
-1. Add a new Device Update tag value, as shown:
-
- ```JSON
- "tags": {
- "ADUGroup": "<CustomTagValue>"
- }
- ```
+>
+>```sh
+>sudo /usr/bin/AducIotAgent --register-content-handler /var/lib/adu/extensions/sources/libmicrosoft_apt_1.so --update-type 'microsoft/apt:1'
+>```
## Import the update
-1. You will need the files `TutorialImportManifest_Sim.importmanifest.json` and `adu-update-image-raspberrypi3.swu` from the downloaded `Tutorial_Simulator.zip` in the prerequisites. The update file is reused from the Raspberry Pi tutorial. Because the update in this tutorial is simulated, the specific file content doesn't matter.
+In this section, you use the files `TutorialImportManifest_Sim.importmanifest.json` and `adu-update-image-raspberrypi3.swu` from the downloaded `Tutorial_Simulator.zip` in the prerequisites. The update file is reused from the Raspberry Pi tutorial. Because the update in this tutorial is simulated, the specific file content doesn't matter.
+ 1. Sign in to the [Azure portal](https://portal.azure.com/) and go to your IoT hub with Device Update. On the left pane, under **Automatic Device Management**, select **Updates**. 1. Select the **Updates** tab. 1. Select **+ Import New Update**.
Read the license terms prior to using the agent. Your installation and use const
1. In your container, select **Upload** and go to the files you downloaded in step 1. After you've selected all your update files, select **Upload**. Then select the **Select** button to return to the **Import update** page. :::image type="content" source="media/import-update/import-select-ppr.png" alt-text="Screenshot that shows selecting uploaded files." lightbox="media/import-update/import-select-ppr.png":::
-
+ _This screenshot shows the import step. File names might not match the ones used in the example._ 1. On the **Import update** page, review the files to be imported. Then select **Import update** to start the import process.
Read the license terms prior to using the agent. Your installation and use const
:::image type="content" source="media/import-update/update-ready-ppr.png" alt-text="Screenshot that shows the job status." lightbox="media/import-update/update-ready-ppr.png":::
-[Learn more](import-update.md) about how to import updates.
+For more information about the import process, see [Import an update to Device Update for IoT Hub](import-update.md).
+
+## View device groups
-## Create an update group
+Device Update uses groups to organize devices. Device Update automatically sorts devices into groups based on their assigned tags and compatibility properties. Each device belongs to only one group, but groups can have multiple subgroups to sort different device classes.
1. Go to the **Groups and Deployments** tab at the top of the page. :::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot that shows ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
-1. Select **Add group** to create a new group.
-
- :::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot that shows device group addition." lightbox="media/create-update-group/add-group.png":::
-
-1. Select an **IoT Hub** tag and **Device Class** from the list. Then select **Create group**.
-
- :::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot that shows tag selection." lightbox="media/create-update-group/select-tag.png":::
-
-1. After the group is created, the update compliance chart and groups list are updated. The update compliance chart shows the count of devices in various states of compliance: **On latest update**, **New updates available**, and **Updates in progress**. [Learn about update compliance](device-update-compliance.md).
+1. View the list of groups and the update compliance chart. The update compliance chart shows the count of devices in various states of compliance: **On latest update**, **New updates available**, and **Updates in progress**. [Learn about update compliance](device-update-compliance.md).
:::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot that shows the update compliance view." lightbox="media/create-update-group/updated-view.png":::
-1. You should see your newly created group and any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they'll show up in a corresponding invalid group. To deploy the best available update to the new user-defined group from this view, select **Deploy** next to the group.
+1. You should see a device group that contains the simulated device you set up in this tutorial along with any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they'll show up in a corresponding invalid group. To deploy the best available update to the new user-defined group from this view, select **Deploy** next to the group.
-[Learn more](create-update-group.md) about how to add tags and create update groups.
+For more information about tags and groups, see [Manage device groups](create-update-group.md).
## Deploy the update
-1. After the group is created, you should see a new update available for your device group. A link to the update should be under **Best update**. You might need to refresh once. [Learn more about update compliance](device-update-compliance.md).
+1. After the group is created, you should see a new update available for your device group. A link to the update should be under **Best update**. You might need to refresh the page.
+ 1. Select the target group by selecting the group name. You're directed to **Group details** under **Group basics**. :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Screenshot that shows Group details." lightbox="media/deploy-update/group-basics.png":::
Read the license terms prior to using the agent. Your installation and use const
:::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Screenshot that shows the deployment is active." lightbox="media/deploy-update/deployment-active.png"::: 1. View the compliance chart to see that the update is now in progress.
-1. After your device is successfully updated, you see that your compliance chart and deployment details updated to reflect the same.
+1. After your device is successfully updated, you see that your compliance chart and deployment details are updated to reflect the same.
:::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Screenshot that shows Update succeeded." lightbox="media/deploy-update/update-succeeded.png":::
iot-hub-device-update Device Update Ubuntu Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-ubuntu-agent.md
The tools and concepts in this tutorial still apply even if you plan to use a di
In this tutorial, you'll learn how to: > [!div class="checklist"]
+>
> * Download and install the Device Update agent and its dependencies. > * Add a tag to your device. > * Import an update.
-> * Create a device group.
> * Deploy a package update. > * Monitor the update deployment.
In this tutorial, you'll learn how to:
* You need the [connection string for an IoT Edge device](../iot-edge/how-to-provision-single-device-linux-symmetric.md?view=iotedge-2020-11&preserve-view=true#view-registered-devices-and-retrieve-provisioning-information). * If you used the [Simulator agent tutorial](device-update-simulator.md) for prior testing, run the following command to invoke the APT handler and deploy over-the-air package updates in this tutorial:
- ```sh
- # sudo /usr/bin/AducIotAgent --register-content-handler /var/lib/adu/extensions/sources/libmicrosoft_apt_1.so --update-type 'microsoft/a pt:1'
- ```
+ ```sh
+ sudo /usr/bin/AducIotAgent --register-content-handler /var/lib/adu/extensions/sources/libmicrosoft_apt_1.so --update-type 'microsoft/apt:1'
+ ```
## Prepare a device
For convenience, this tutorial uses a [cloud-init](../virtual-machines/linux/usi
1. Fill in the available text boxes:
- > [!div class="mx-imgBorder"]
- > [![Screenshot showing the iotedge-vm-deploy template.](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-deploy.png)](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-deploy.png)
-
- - **Subscription**: The active Azure subscription to deploy the virtual machine into.
- - **Resource group**: An existing or newly created resource group to contain the virtual machine and its associated resources.
- - **Region**: The [geographic region](https://azure.microsoft.com/global-infrastructure/locations/) to deploy the virtual machine into. This value defaults to the location of the selected resource group.
- - **DNS Label Prefix**: A required value of your choosing that's used to prefix the hostname of the virtual machine.
- - **Admin Username**: A username, which is provided root privileges on deployment.
- - **Device Connection String**: A [device connection string](../iot-edge/how-to-provision-single-device-linux-symmetric.md#view-registered-devices-and-retrieve-provisioning-information) for a device that was created within your intended [IoT hub](../iot-hub/about-iot-hub.md).
- - **VM Size**: The [size](../cloud-services/cloud-services-sizes-specs.md) of the virtual machine to be deployed.
- - **Ubuntu OS Version**: The version of the Ubuntu OS to be installed on the base virtual machine. Leave the default value unchanged because it will be set to Ubuntu 18.04-LTS already.
- - **Authentication Type**: Choose **sshPublicKey** or **password** based on your preference.
- - **Admin Password or Key**: The value of the SSH Public Key or the value of the password based on the choice of authentication type.
-
- After all the boxes are filled in, select the checkbox at the bottom of the page to accept the terms. Select **Purchase** to begin the deployment.
+ > [!div class="mx-imgBorder"]
+ > [![Screenshot showing the iotedge-vm-deploy template.](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-deploy.png)](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-deploy.png)
+
+ * **Subscription**: The active Azure subscription to deploy the virtual machine into.
+ * **Resource group**: An existing or newly created resource group to contain the virtual machine and its associated resources.
+ * **Region**: The [geographic region](https://azure.microsoft.com/global-infrastructure/locations/) to deploy the virtual machine into. This value defaults to the location of the selected resource group.
+ * **DNS Label Prefix**: A required value of your choosing that's used to prefix the hostname of the virtual machine.
+ * **Admin Username**: A username that's granted root privileges on deployment.
+ * **Device Connection String**: A [device connection string](../iot-edge/how-to-provision-single-device-linux-symmetric.md#view-registered-devices-and-retrieve-provisioning-information) for a device that was created within your intended [IoT hub](../iot-hub/about-iot-hub.md).
+ * **VM Size**: The [size](../cloud-services/cloud-services-sizes-specs.md) of the virtual machine to be deployed.
+ * **Ubuntu OS Version**: The version of the Ubuntu OS to be installed on the base virtual machine. Leave the default value unchanged because it will be set to Ubuntu 18.04-LTS already.
+ * **Authentication Type**: Choose **sshPublicKey** or **password** based on your preference.
+ * **Admin Password or Key**: The value of the SSH Public Key or the value of the password based on the choice of authentication type.
+
+ After all the boxes are filled in, select the checkbox at the bottom of the page to accept the terms. Select **Purchase** to begin the deployment.
1. Verify that the deployment has completed successfully. Allow a few minutes after deployment completes for the post-installation and configuration to finish installing IoT Edge and the device package update agent. A virtual machine resource should have been deployed into the selected resource group. Note the machine name, which is in the format `vm-0000000000000`. Also note the associated **DNS name**, which is in the format `<dnsLabelPrefix>`.`<location>`.cloudapp.azure.com.
- You can obtain the **DNS name** from the **Overview** section of the newly deployed virtual machine in the Azure portal.
+ You can obtain the **DNS name** from the **Overview** section of the newly deployed virtual machine in the Azure portal.
+
+ > [!div class="mx-imgBorder"]
+ > [![Screenshot showing the DNS name of the iotedge vm.](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-dns-name.png)](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-dns-name.png)
- > [!div class="mx-imgBorder"]
- > [![Screenshot showing the DNS name of the iotedge vm.](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-dns-name.png)](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-dns-name.png)
-
- > [!TIP]
- > To SSH into this VM after setup, use the associated **DNS name** with the following command:
+ > [!TIP]
+ > To SSH into this VM after setup, use the associated **DNS name** with the following command:
`ssh <adminUsername>@<DNS_Name>`.
- 1. Open the configuration details (See how to [set up configuration file here](device-update-configuration-file.md) with the command below. Set your connectionType as 'AIS' and connectionData as empty string.
+1. Open the configuration file with the command below (see how to [set up the configuration file](device-update-configuration-file.md)). Set your connectionType to 'AIS' and connectionData to an empty string, as sketched after the command.
- ```markdown
+ ```bash
/etc/adu/du-config.json ```
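The relevant portion of `/etc/adu/du-config.json` would then look similar to this sketch, which follows the agents structure documented in the [configuration file](device-update-configuration-file.md) article; the manufacturer, model, and agent name values are placeholders for your own:

```JSON
"agents": [
    {
        "name": "main",
        "runas": "adu",
        "connectionSource": {
            "connectionType": "AIS",
            "connectionData": ""
        },
        "manufacturer": "<your manufacturer>",
        "model": "<your model>"
    }
]
```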
- 5. Restart the Device Update agent by running the following command:
+1. Restart the Device Update agent.
- ```markdown
- sudo systemctl restart adu-agent
+ ```bash
+ sudo systemctl restart adu-agent
``` Device Update for Azure IoT Hub software packages are subject to the following license terms:
- * [Device update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE)
- * [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE)
+* [Device update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE)
+* [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE)
-Read the license terms before you use a package. Your installation and use of a package constitutes your acceptance of these terms. If you don't agree with the license terms, don't use that package.
+Read the license terms before you use a package. Your installation and use of a package constitutes your acceptance of these terms. If you don't agree with the license terms, don't use that package.
### Manually prepare a device
Similar to the steps automated by the [cloud-init script](https://github.com/Azu
1. Install the Device Update agent .deb packages: ```bash
- sudo apt-get install deviceupdate-agent deliveryoptimization-plugin-apt
+ sudo apt-get install deviceupdate-agent
```
-1. Enter your IoT device's module (or device, depending on how you [provisioned the device with Device Update](device-update-agent-provisioning.md)) primary connection string in the configuration file by running the following command:
+1. Enter your IoT device's module (or device, depending on how you [provisioned the device with Device Update](device-update-agent-provisioning.md)) primary connection string in the configuration file.
- ```markdown
+ ```bash
/etc/adu/du-config.json ```
-5. Restart the Device Update agent by running the following command:
+1. Restart the Device Update agent.
- ```markdown
- sudo systemctl restart adu-agent
+ ```bash
+ sudo systemctl restart adu-agent
``` Device Update for Azure IoT Hub software packages are subject to the following license terms:
- * [Device update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE)
- * [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE)
+* [Device update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE)
+* [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE)
Read the license terms before you use a package. Your installation and use of a package constitutes your acceptance of these terms. If you don't agree with the license terms, don't use that package.
Read the license terms before you use a package. Your installation and use of a
1. In the module twin of the Device Update agent module, delete any existing Device Update tag values by setting them to null. If you're using Device identity with Device Update agent, make these changes on the device twin. 1. Add a new Device Update tag value, as shown:
- ```JSON
- "tags": {
- "ADUGroup": "<CustomTagValue>"
- },
- ```
+ ```JSON
+ "tags": {
+ "ADUGroup": "<CustomTagValue>"
+ },
+ ```
## Import the update
Read the license terms before you use a package. Your installation and use of a
1. In your container, select **Upload** and go to the files you downloaded in step 1. After you select all your update files, select **Upload**. Then select the **Select** button to return to the **Import update** page. :::image type="content" source="media/import-update/import-select-ppr.png" alt-text="Screenshot that shows selecting uploaded files." lightbox="media/import-update/import-select-ppr.png":::
-
+ _This screenshot shows the import step. File names might not match the ones used in the example._ 1. On the **Import update** page, review the files to be imported. Then select **Import update** to start the import process.
Read the license terms before you use a package. Your installation and use of a
:::image type="content" source="media/import-update/update-ready-ppr.png" alt-text="Screenshot that shows the job status." lightbox="media/import-update/update-ready-ppr.png":::
-[Learn more](import-update.md) about how to import updates.
+For more information about the import process, see [Import an update to Device Update](import-update.md).
+
+## View device groups
-## Create an update group
+Device Update uses groups to organize devices. Device Update automatically sorts devices into groups based on their assigned tags and compatibility properties. Each device belongs to only one group, but groups can have multiple subgroups to sort different device classes.
1. Go to the **Groups and Deployments** tab at the top of the page. :::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot that shows ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
-1. Select the **Add group** button to create a new group.
-
- :::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot that shows device group addition." lightbox="media/create-update-group/add-group.png":::
-
-1. Select an **IoT Hub** tag and **Device Class** from the list. Then select **Create group**.
-
- :::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot that shows tag selection." lightbox="media/create-update-group/select-tag.png":::
-
-1. After the group is created, you see that the update compliance chart and groups list are updated. The update compliance chart shows the count of devices in various states of compliance: **On latest update**, **New updates available**, and **Updates in progress**. [Learn about update compliance](device-update-compliance.md).
+1. View the list of groups and the update compliance chart. The update compliance chart shows the count of devices in various states of compliance: **On latest update**, **New updates available**, and **Updates in progress**. [Learn about update compliance](device-update-compliance.md).
:::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot that shows the update compliance view." lightbox="media/create-update-group/updated-view.png":::
+1. You should see a device group that contains the device you set up in this tutorial, along with any available updates for the devices in the group. If there are devices that don't meet the device class requirements of the group, they'll show up in a corresponding invalid group. To deploy the best available update to the new user-defined group from this view, select **Deploy** next to the group.
+1. You should see a device group that contains the simulated device you set up in this tutorial along with any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they'll show up in a corresponding invalid group. To deploy the best available update to the new user-defined group from this view, select **Deploy** next to the group.
-[Learn more](create-update-group.md) about how to add tags and create update groups.
+For more information about tags and groups, see [Manage device groups](create-update-group.md).
## Deploy the update
-1. After the group is created, you should see a new update available for your device group with a link to the update under **Best update**. You might need to refresh once. [Learn more about update compliance](device-update-compliance.md).
+1. After the group is created, you should see a new update available for your device group with a link to the update under **Best update**. You might need to refresh once.
+
+ For more information about compliance, see [Device Update compliance](device-update-compliance.md).
1. Select the target group by selecting the group name. You're directed to the group details under **Group basics**.
Read the license terms before you use a package. Your installation and use of a
> [!TIP] > By default, the **Start** date and time is 24 hours from your current time. Be sure to select a different date and time if you want the deployment to begin earlier.
- :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Screenshot that shows creating a deployment." lightbox="media/deploy-update/create-deployment.png":::
+ :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Screenshot that shows creating a deployment." lightbox="media/deploy-update/create-deployment.png":::
1. Under **Deployment details**, **Status** turns to **Active**. The deployed update is marked with **(deploying)**.
Read the license terms before you use a package. Your installation and use of a
1. View the compliance chart to see that the update is now in progress.
-1. After your device is successfully updated, you see that your compliance chart and deployment details updated to reflect the same.
+1. After your device is successfully updated, you see that your compliance chart and deployment details are updated to reflect the same.
:::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Screenshot that shows the update succeeded." lightbox="media/deploy-update/update-succeeded.png":::
When no longer needed, clean up your device update account, instance, and IoT hu
## Next steps
-Use the following tutorials for a simple demonstration of Device Update for IoT Hub:
--- [Image Update: Getting started with Raspberry Pi 3 B+ reference Yocto image](device-update-raspberry-pi.md) extensible via open source to build your own images for other architecture as needed.-- [Proxy Update: Getting started using Device Update binary agent for downstream devices](device-update-howto-proxy-updates.md).-- [Getting started using Ubuntu (18.04 x64) simulator reference agent](device-update-simulator.md).-- [Device Update for Azure IoT Hub tutorial for Azure real-time operating system](device-update-azure-real-time-operating-system.md).
+> [!div class="nextstepaction"]
+> [Update device components or connected sensors with Device Update](device-update-howto-proxy-updates.md)
iot-hub-device-update Import Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/import-concepts.md
For example:
"isDeployable": false, "compatibility": [ {
- "deviceManufacturer": "Contoso",
- "deviceModel": "Toaster"
+ "manufacturer": "Contoso",
+ "model": "Toaster"
} ], "instructions": {
For example:
} ], "createdDateTime": "2022-01-19T06:23:52.6996916Z",
- "manifestVersion": "4.0"
+ "manifestVersion": "5.0"
} ```
Here's an example of an update that can only be deployed to a device that report
{ "compatibility": [ {
- "deviceManufacturer": "Contoso",
- "deviceModel": "Toaster"
+ "manufacturer": "Contoso",
+ "model": "Toaster"
} ] }
iot-hub-device-update Import Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/import-schema.md
Title: Importing updates into Device Update for IoT Hub - schema and other information | Microsoft Docs
-description: Schema and other related information (including objects) that is used when importing updates into Device Update for IoT Hub.
+ Title: Importing updates into Device Update for IoT Hub - import manifest schema | Microsoft Docs
+description: Schema used to create the import manifest required to import updates into Device Update for IoT Hub.
Previously updated : 06/27/2022 Last updated : 09/9/2022 # Importing updates into Device Update for IoT Hub: schema and other information
-If you want to import an update into Device Update for IoT Hub, be sure you've reviewed the [concepts](import-concepts.md) and [how-to guide](import-update.md) first. If you're interested in the details of import manifest schema, or information about API permissions, see below.
+If you want to import an update into Device Update for IoT Hub, be sure you've reviewed the [concepts](import-concepts.md) and [how-to guide](import-update.md) first. If you're interested in the details of the import manifest schema itself, see below.
-The import manifest JSON schema is hosted at [SchemaStore.org](https://json.schemastore.org/azure-deviceupdate-import-manifest-4.0.json).
+The import manifest JSON schema is hosted at [SchemaStore.org](https://json.schemastore.org/azure-deviceupdate-import-manifest-5.0.json).
## Schema
For example:
{ "compatibility": [ {
- "deviceManufacturer": "Contoso",
- "deviceModel": "Toaster"
+ "manufacturer": "Contoso",
+ "model": "Toaster"
} ] }
A *file* object is an update payload file, for example, binary, firmware, script
|**filename**|`string`|Update payload file name.<br><br>Maximum length: 255 characters|Yes| |**sizeInBytes**|`number`|File size in number of bytes.<br><br>Maximum size: 2147483648 bytes|Yes| |**hashes**|`fileHashes`|Base64-encoded file hashes with algorithm name as key. At least SHA-256 algorithm must be specified, and additional algorithm may be specified if supported by agent. See below for details on how to calculate the hash. |Yes|
+|**relatedFiles**|`relatedFile[0-4]`|Collection of related files to one or more of your primary payload files. |No|
+|**downloadHandler**|`downloadHandler`|Specifies how to process any related files. |Yes only if using relatedFiles|
Additional properties aren't allowed.
For example:
} } ```
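The `hashes` value for each file is a base64-encoded SHA-256 digest of the file contents. As a rough sketch of how to produce that value on a Linux machine (the payload file name below is a placeholder, and this assumes `openssl` is available):

```bash
# Compute the SHA-256 digest of the payload as raw bytes, then base64-encode it
# into the form expected by the "hashes" property (for example, under the "sha256" key).
openssl dgst -sha256 -binary ./update-payload.bin | openssl base64 -A
```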
+## relatedFiles object
+
+Collection of related files to one or more of your primary payload files.
+
+|Property|Type|Description|Required|
+|||||
+|**filename**|`string`|Name of a related file that's associated with a primary payload file.|Yes|
+|**sizeInBytes**|`number`|File size in number of bytes.<br><br>Maximum size: 2147483648 bytes|Yes|
+|**hashes**|`fileHashes`|Base64-encoded file hashes with algorithm name as key. At least SHA-256 algorithm must be specified, and additional algorithm may be specified if supported by agent. See below for details on how to calculate the hash. |Yes|
+|**properties**|`relatedFilesProperties` `[0-5]`|Limit of 5 key-value pairs, where key is limited to 64 ASCII characters and value is JObject (with up to 256 ASCII characters). |No|
+
+Additional properties are allowed.
+
+For example:
+
+```json
+"relatedFiles": [
+ {
+ "filename": "in1_in2_deltaupdate.dat",
+ "sizeInBytes": 102910752,
+ "hashes": {
+ "sha256": "2MIldV8LkdKenjJasgTHuYi+apgtNQ9FeL2xsV3ikHY="
+ },
+ "properties": {
+ "microsoft.sourceFileHashAlgorithm": "sha256",
+ "microsoft.sourceFileHash": "YmFYwnEUddq2nZsBAn5v7gCRKdHx+TUntMz5tLwU+24="
+ }
+ }
+],
+```
+## downloadHandler object
+
+Specifies how to process any related files.
+
+|Property|Type|Description|Required|
+|||||
+|**id**|`string`|Identifier for downloadHandler. Limit of 64 ASCII characters.|Yes|
+
+Additional properties are not allowed.
+
+For example:
+
+```json
+"downloadHandler": {
+ "id": "microsoft/delta:1"
+}
+```
## Next steps
iot-hub-device-update Import Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/import-update.md
Title: Add an update to Device Update for IoT Hub | Microsoft Docs
description: How-To guide to add an update into Device Update for IoT Hub. Previously updated : 1/31/2022 Last updated : 10/31/2022
Learn how to obtain a new update and import it into Device Update for IoT Hub. I
## Prerequisites
-* [Access to an IoT Hub with Device Update for IoT Hub enabled](create-device-update-account.md).
+* Access to [an IoT Hub with Device Update for IoT Hub enabled](create-device-update-account.md).
* An IoT device (or simulator) [provisioned for Device Update](device-update-agent-provisioning.md) within IoT Hub.
-* [PowerShell 5](/powershell/scripting/install/installing-powershell) or later (includes Linux, macOS, and Windows installs)
-* Supported browsers:
- * [Microsoft Edge](https://www.microsoft.com/edge)
- * Google Chrome
+* Follow the steps in [Prepare an update to import into Device Update for IoT Hub](create-update.md) to create the import manifest for your update files.
+
+# [Azure portal](#tab/portal)
+
+Supported browsers:
+
+* [Microsoft Edge](https://www.microsoft.com/edge)
+* Google Chrome
+
+# [Azure CLI](#tab/cli)
+
+An Azure CLI environment:
+
+* Use the Bash environment in [Azure Cloud Shell](../cloud-shell/quickstart.md).
+
+ [![Launch Cloud Shell in a new window](../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
+
+* Or, if you prefer to run CLI reference commands locally, [install the Azure CLI](/cli/azure/install-azure-cli).
+
+ 1. Sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command.
+ 2. Run [az version](/cli/azure/reference-index#az-version) to find the version and dependent libraries that are installed. To upgrade to the latest version, run [az upgrade](/cli/azure/reference-index#az-upgrade).
+ 3. When prompted, install Azure CLI extensions on first use. The commands in this article use the **azure-iot** extension. Run `az extension update --name azure-iot` to make sure you're using the latest version of the extension.
+
+>[!TIP]
+>The Azure CLI commands in this article use the backslash `\` character for line continuation so that the command arguments are easier to read. This syntax works in Bash environments. If you're running these commands in PowerShell, replace each backslash with a backtick (`` ` ``), or remove them entirely.
++ ## Import an update
-> [!NOTE]
-> The following instructions show how to import an update via the Azure portal UI. You can also use the [Device Update for IoT Hub APIs](#if-youre-importing-via-apis-instead) to import an update instead.
+This section shows how to import an update using either the Azure portal or the Azure CLI. You can also use the [Device Update for IoT Hub APIs](#if-youre-importing-using-apis-instead) to import an update instead.
+
+To import an update, you first upload the update files and import manifest into an Azure Storage container. Then, you import the update from Azure Storage into Device Update for IoT Hub.
+
+# [Azure portal](#tab/portal)
-1. Log in to the [Azure portal](https://portal.azure.com) and navigate to your IoT Hub with Device Update.
+1. In the [Azure portal](https://portal.azure.com), navigate to your IoT Hub with Device Update.
-2. On the left-hand side of the page, select `Updates` under `Device Management`.
+2. On the left-hand side of the page, select **Updates** under **Device Management**.
:::image type="content" source="media/import-update/import-updates-3-ppr.png" alt-text="Import Updates" lightbox="media/import-update/import-updates-3-ppr.png":::
-3. Select the `Updates` tab from the list of tabs across the top of the screen.
+3. Select the **Updates** tab from the list of tabs across the top of the screen.
:::image type="content" source="media/import-update/updates-tab-ppr.png" alt-text="Updates" lightbox="media/import-update/updates-tab-ppr.png":::
-4. Select `+ Import a new update` below the `Available Updates` header.
+4. Select **+ Import a new update** below the **Available Updates** header.
:::image type="content" source="media/import-update/import-new-update-2-ppr.png" alt-text="Import New Update" lightbox="media/import-update/import-new-update-2-ppr.png":::
-5. Select `+ Select from storage container`. The Storage accounts UI is shown. Select an existing account, or create an account using `+ Storage account`. This account is used for a container to stage your updates for import.
+5. Select **+ Select from storage container**. The Storage accounts UI is shown. Select an existing account, or create an account using **+ Storage account**. This account is used for a container to stage your updates for import.
:::image type="content" source="media/import-update/select-update-files-ppr.png" alt-text="Select Update Files" lightbox="media/import-update/select-update-files-ppr.png":::
-6. Once you've selected a Storage account, the Containers UI is shown. Select an existing container, or create a container using `+ Container`. This container is used to stage your update files for importing _Recommendation: use a new container each time you import an update to avoid accidentally importing files from previous updates. If you don't use a new container, be sure to delete any files from the existing container before you complete this step._
+6. Once you've selected a Storage account, the Containers UI is shown. Select an existing container, or create a container using **+ Container**. This container is used to stage your update files for importing.
+
+ We recommend that you use a new container each time you import an update. Always using new containers helps you to avoid accidentally importing files from previous updates. If you don't use a new container, be sure to delete any files from the existing container before you complete this step.
:::image type="content" source="media/import-update/storage-account-ppr.png" alt-text="Storage Account" lightbox="media/import-update/storage-account-ppr.png":::
-7. In your container, select `Upload`. The Upload UI is shown.
+7. In your container, select **Upload**. The Upload UI is shown.
:::image type="content" source="media/import-update/container-ppr.png" alt-text="Select Container" lightbox="media/import-update/container-ppr.png":::
-8. Select the folder icon on the right side of the `Files` section under the `Upload blob` header. Use the file picker to navigate to the location of your update files and import manifest, select all of the files, then select `Open`. _You can hold the Shift key and click to multi-select files._
+8. Select the folder icon on the right side of the **Files** section under the **Upload blob** header. Use the file picker to navigate to the location of your update files and import manifest, select all of the files, then select **Open**. _You can hold the Shift key and click to multi-select files._
:::image type="content" source="media/import-update/container-picker-ppr.png" alt-text="Publish Update" lightbox="media/import-update/container-picker-ppr.png":::
-9. When you've selected all your update files, select `Upload`.
+9. When you've selected all your update files, select **Upload**.
:::image type="content" source="media/import-update/container-upload-ppr.png" alt-text="Container Upload" lightbox="media/import-update/container-picker-ppr.png":::
-10. Select the uploaded files to designate them to be imported . Then click the `Select` button to return to the `Import update` page.
+10. Select the uploaded files to designate them to be imported. Then select the **Select** button to return to the **Import update** page.
:::image type="content" source="media/import-update/import-select-ppr.png" alt-text="Select Uploaded Files" lightbox="media/import-update/import-select-ppr.png":::
-11. On the Import update page, review the files to be imported. Then select `Import update` to start the import process. _To resolve any errors, see the [Proxy Update Troubleshooting](device-update-proxy-update-troubleshooting.md) page ._
+11. On the Import update page, review the files to be imported. Then select **Import update** to start the import process. To resolve any errors, see [Proxy update troubleshooting](device-update-proxy-update-troubleshooting.md).
:::image type="content" source="media/import-update/import-start-2-ppr.png" alt-text="Import Start" lightbox="media/import-update/import-start-2-ppr.png":::
-12. The import process begins, and the screen switches to the `Import History` section. Select `Refresh` to view progress until the import process completes (depending on the size of the update, the process might complete in a few minutes but could take longer).
+12. The import process begins, and the screen switches to the **Import History** section. Select **Refresh** to view progress until the import process completes (depending on the size of the update, the process might complete in a few minutes but could take longer).
:::image type="content" source="media/import-update/update-publishing-sequence-2-ppr.png" alt-text="Update Import Sequencing" lightbox="media/import-update/update-publishing-sequence-2-ppr.png":::
-13. When the `Status` column indicates that the import has succeeded, select the `Available Updates` header. You should see your imported update in the list now.
+13. When the **Status** column indicates that the import has succeeded, select the **Available Updates** header. You should see your imported update in the list now.
:::image type="content" source="media/import-update/update-ready-ppr.png" alt-text="Job Status" lightbox="media/import-update/update-ready-ppr.png":::
-## If you're importing via APIs instead
+# [Azure CLI](#tab/cli)
+
+The [az iot du update stage](/cli/azure/iot/du/update#az-iot-du-update-stage) command handles the prerequisite steps of importing an update, including uploading the update files into a target storage container. An optional flag also lets this command automatically import the files after they're prepared. Otherwise, the [az iot du update import](/cli/azure/iot/du/update#az-iot-du-update-import) command completes the process.
+
+The `stage` command takes the following arguments:
+
+* `--account`: The Device Update account name.
+* `--instance`: The Device Update instance name.
+* `--manifest-path`: The file path to the import manifest that should be staged.
+* `--storage-account`: The name of the storage account to stage the update.
+* `--storage-container`: The name of the container within the selected storage account to stage the update.
+* `--overwrite`: Optional flag that indicates whether to overwrite existing blobs in the storage container if there's a conflict.
+* `--then-import`: Optional flag that indicates whether the update should be imported to Device Update after it's staged.
+
+```azurecli
+az iot du update stage \
+ --account <Replace with your Device Update account name> \
+ --instance <Replace with your Device Update instance name> \
+ --manifest-path <Replace with the full path to your import manifest> \
+ --storage-account <Replace with your Storage account name> \
+ --storage-container <Replace with your container name> \
+ --overwrite --then-import
+```
+
+For example:
+
+```azurecli
+az iot du update stage \
+ --account deviceUpdate001 \
+ --instance myInstance \
+ --manifest-path /my/apt/manifest/file.importmanifest.json \
+ --storage-account deviceUpdateStorage \
+ --storage-container deviceUpdateDemo \
+ --overwrite --then-import
+```
+
+If you have multiple import manifests, you can include them all in a single command. For example:
+
+```azurecli
+az iot du update stage \
+ --account deviceUpdate001 \
+ --instance myInstance \
+ --manifest-path /my/apt/manifest/parent.importmanifest.json \
+ --manifest-path /my/apt/manifest/child1.importmanifest.json \
+ --manifest-path /my/apt/manifest/child2.importmanifest.json \
+ --storage-account deviceUpdateStorage \
+ --storage-container deviceUpdateDemo \
+ --overwrite --then-import
+```
+
+If you don't use the `--then-import` flag, the output of the `stage` command includes a prompt to run [az iot du update import](/cli/azure/iot/du/update#az-iot-du-update-import), including pre-populated arguments.
+
+Use [az iot du update list](/cli/azure/iot/du/update#az-iot-du-update-list) to verify that your update or updates were successfully imported.
+
+```azurecli
+az iot du update list \
+ --account <Replace with your Device Update account name> \
+ --instance <Replace with your Device Update instance name> \
+ -o table
+```
+++
+## If you're importing using APIs instead
+
+You can also import an update programmatically by:
-In addition to importing via the Azure portal, you can also import an update programmatically by:
* Using `Azure SDK` for [.NET](/dotnet/api/azure.iot.deviceupdate), [Java](/java/api/com.azure.iot.deviceupdate), [JavaScript](/javascript/api/@azure/iot-device-update) or [Python](/python/api/azure-mgmt-deviceupdate/azure.mgmt.deviceupdate) * Using [Import Update REST API](/rest/api/deviceupdate/2020-09-01/updates) * Using [sample PowerShell modules](https://github.com/Azure/iot-hub-device-update/tree/main/tools/AduCmdlets)
+ * Requires [PowerShell 5](/powershell/scripting/install/installing-powershell) or later (includes Linux, macOS, and Windows installs)
> [!NOTE] > Refer to [Device update user roles and access](device-update-control-access.md) for required API permission.
-Update files and import manifest must be uploaded to an Azure Storage Blob container for staging. To import the staged files, provide the blob URL, or shared access signature (SAS) for private blobs, to the Device Update API. If using a SAS, be sure to provide a three hour or greater expiration window.
+Update files and the import manifest must be uploaded to an Azure Storage Blob container for staging. To import the staged files, provide the blob URL, or shared access signature (SAS) for private blobs, to the Device Update API. If using a SAS, be sure to provide an expiration window of three hours or more.
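If you need a SAS for a private blob, one way to generate a SAS URL with a roughly three-hour expiration window is sketched below. This is only an illustration: the account, container, and blob names are placeholders, it assumes account-key authorization and a Bash environment (the `date` expression uses GNU date syntax, as in Azure Cloud Shell), and you should confirm the options against the `az storage blob generate-sas` reference for your CLI version.

```azurecli
az storage blob generate-sas \
    --account-name <your-storage-account> \
    --container-name <your-container> \
    --name <your-blob-name> \
    --permissions r \
    --expiry $(date -u -d "+3 hours" '+%Y-%m-%dT%H:%MZ') \
    --https-only \
    --full-uri \
    --output tsv
```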
> [!TIP] > To upload large update files to Azure Storage Blob container, you may use one of the following for better performance:
-> - [AzCopy](../storage/common/storage-use-azcopy-v10.md)
-> - [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer)
+>
+> * [AzCopy](../storage/common/storage-use-azcopy-v10.md)
+> * [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer)
## Next Steps * [Create Groups](create-update-group.md)
-* [Learn about import concepts](import-concepts.md)
+* [Learn about import concepts](import-concepts.md)
iot-hub-device-update Migration Pp To Ppr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/migration-pp-to-ppr.md
- Title: Migrating to the latest Device Update for Azure IoT Hub release | Microsoft Docs
-description: Understand how to migrate to latest Device Update for Azure IoT Hub release
-- Previously updated : 1/14/2022----
-# Migrate devices and groups from Public Preview to Public Preview Refresh
-
-As the Device Update for IoT Hub service releases new versions, you'll want to update your devices for the latest features and security improvements. This article provides information about how to migrate from the [Public Preview release](/previous-versions/azure/iot-hub-device-update/understand-device-update) to the current, [Public Preview Refresh (PPR) release](understand-device-update.md). This article also explains the group and UX behavior across these releases. If you do not have devices, groups, and deployments that use the Public Preview release, you can ignore this page.
-
-To migrate successfully, you will have to upgrade the DU agent running on your devices. You will also have to create new device groups to deploy and manage updates. Note that as there are major changes with the PPR release, we recommend that you follow the instructions closely to avoid errors.
-
-## Update the device update agent
-
-For the Public Preview Refresh release, the Device Update agent needs to be updated manually as described below. Updating the agent through a Device Update deployment is not supported due to major changes across the Public Preview and PPR release.
-
-1. To view devices using older agents (versions 0.7.0/0.6.0) and groups created before 02/03/2022, navigate to the public preview portal, which can be accessed through the banner.
-
- :::image type="content" source="media/migration/switch-banner.png" alt-text="Screenshot of banner." lightbox="media/migration/switch-banner.png":::
-
-2. Create a new IoT/IoT Edge device on the Azure portal. Copy the primary connection string for the device from the device view for later. For more details, refer the [Add Device to IoT Hub](device-update-simulator.md#add-a-device-to-azure-iot-hub) section.
-
-3. Then, SSH into your device and remove any old Device Update agent.
- ```bash
- sudo apt remove deviceupdate-agent
- sudo apt remove adu-agent
- ```
-
-4. Remove the old configuration file
- ```bash
- sudo rm -f /etc/adu/adu-conf.txt
- ```
-
-5. Install the new agent
- ```bash
- sudo apt-get install deviceupdate-agent
- ```
- Alternatively, you can get the .deb asset from [GitHub](https://github.com/Azure/iot-hub-device-update) and install the agent
-
- ```bash
- sudo apt install <file>.deb
- ```
-
- Trying to upgrade the Device Update agent without removing the old agent and configuration files will result in the error shown below.
-
- :::image type="content" source="media/migration/update-error.png" alt-text="Screenshot of update error." lightbox="media/migration/update-error.png":::
-
-
-6. Enter your IoT device's device (or module, depending on how you [provisioned the device with Device Update](device-update-agent-provisioning.md)) primary connection string in the configuration file by running the command below.
-
- ```markdown
- sudo nano /etc/adu/du-config.json
- ```
- 7. Add your model, manufacturer, agent name, connection type and other details in the configuration file
-
- 8. Delete the old IoT/IoT Edge device from the public preview portal.
-
-> [!NOTE]
-> Attempting to update the agent through a DU deployment will lead to the device no longer being manageable by Device Update. The device will have to be re-provisioned to be managed from Device Update.
-
-## Migrate groups to Public Preview Refresh
-
-1. If your devices are using Device Update agent versions 0.6.0 or 0.7.0, upgrade to the latest agent version 0.8.0 following the steps above.
-
-2. Delete the existing groups in the public preview portal by navigating through the banner.
-
-3. Add group tag to the device twin for the updated devices. For more details, refer the [Add a tag to your device](device-update-simulator.md#add-a-device-to-azure-iot-hub) section.
-
-4. Recreate the groups in the PPR portal by going to 'Add Groups' and selecting the corresponding groups tag from the drop-down list.
-
-5. Note that a group with the same name cannot be created in the PPR portal if the group in the public preview portal is not deleted.
-
-## Group and deployment behavior across releases
--- Groups created in the Public Preview Refresh release portal will only allow addition of devices with the latest Device Update Agent (0.8.0). Devices with older agents (0.7.0/0.6.0) cannot be added to these groups.
-
-- Any new devices using the latest agent will automatically be added to a Default DeviceClass Group in the 'Groups and Deployments' tab. If a group tag is added to the device properties, then the device will be added to that group if a group for that tag exists.
-
-- For the device using the latest agent, if a group tag is added to the device properties but the corresponding group is not yet created, the device will not be visible in the 'Groups and Deployments' tab.
-
-- Devices using the older agents will show up as ungrouped in the old portal if the group tag is not added.-
-## Next steps
-[Understand Device Update agent configuration file](device-update-configuration-file.md)
-
-You can use the following tutorials for a simple demonstration of Device Update for IoT Hub:
--- [Image Update: Getting Started with Raspberry Pi 3 B+ Reference Yocto Image](device-update-raspberry-pi.md) extensible via open source to build you own images for other architecture as needed.
-
-- [Package Update: Getting Started using Ubuntu Server 18.04 x64 Package agent](device-update-ubuntu-agent.md)
-
-- [Proxy Update: Getting Started using Device Update binary agent for downstream devices](device-update-howto-proxy-updates.md)
-
-- [Getting Started Using Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md)--- [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
iot-hub-device-update Migration Public Preview Refresh To Ga https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/migration-public-preview-refresh-to-ga.md
+
+ Title: Migrating to the latest Device Update for IoT Hub release | Microsoft Docs
+description: Understand how to migrate to latest Device Update for IoT Hub release
++ Last updated : 9/15/2022++++
+# Migrate devices and groups to latest Device Update for IoT Hub release
+
+As the Device Update for IoT Hub service releases new versions, you'll want to update your devices for the latest features and security improvements. This article provides information about how to migrate from the Public Preview Refresh (PPR) release to the current [GA release](understand-device-update.md). This article also explains the group and UX behavior across these releases. If you do not have devices, groups, and deployments that use the Public Preview Refresh release, you can ignore this page.
+
+To migrate successfully, you will have to upgrade the Device Update agent running on your devices. Because there are major changes in the GA release, we recommend that you follow the instructions closely to avoid errors.
+
+> [!NOTE]
+> All PPR device groups will be automatically changed to GA groups. The groups and devices will be available after migration. The deployment history will not carry over to the updated GA groups.
+
+## Update the Device Update agent
+
+For the GA release, the Device Update agent can be updated manually, or through the Device Update service by using an APT manifest or image update. If you're using image updates, you can include the GA Device Update agent in your update image.
+
+### Manual DU Agent Upgrade
+
+1. Before you update your device, the device attributes will include the PPR PnP model details. The **Contract Model Name** will show **Device Update Model V1** and **Contract Model ID** will show **dtmi:azure:iot:deviceUpdateContractModel;1**.
+
+2. SSH into your device and update the Device Update agent. (If you want to confirm the installed agent version afterward, see the version check sketch after these steps.)
+ ```bash
+ sudo apt install deviceupdate-agent
+ sudo systemctl restart deviceupdate-agent
+ sudo systemctl status deviceupdate-agent
+ ```
+3. Confirm that the DU agent is running correctly. Look for 'HealthCheck passed'.
+ ```bash
+ sudo -u adu /usr/bin/AducIotAgent -h
+ ```
+4. See the updated device in the Device Update portal. The device attributes will now show the updated PnP model details. The **Contract Model Name** will show **Device Update Model V2** and **Contract Model ID** will show **dtmi:azure:iot:deviceUpdateContractModel;2**.
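If you want to confirm which agent package version is installed after the manual upgrade, a quick check on the device (using the same `deviceupdate-agent` package name as in the steps above) is:

```bash
# Show the installed deviceupdate-agent package version; a 1.0.x version indicates the GA agent.
dpkg -s deviceupdate-agent | grep '^Version'

# Or compare the installed version with what apt can offer.
apt-cache policy deviceupdate-agent
```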
++
+### OTA DU agent upgrade through APT manifest
+
+1. Before you update your devices, the device attributes will include the PPR PnP model details. The **Contract Model Name** will show **Device Update Model V1** and **Contract Model ID** will show **dtmi:azure:iot:deviceUpdateContractModel;1**.
+
+2. Add the Device Update agent upgrade as the last step in your update. The import manifest version must be **"4.0"** to ensure it is targeted to the correct devices. See below for a sample import manifest and APT manifest:
+
+ **Example Import Manifest**
+ ```json
+ {
+ "manifestVersion": "4",
+ "updateId": {
+ "provider": "Contoso",
+ "name": "Sensor",
+ "version": "1.0"
+ },
+ "compatibility": [
+ {
+ "manufacturer": "Contoso",
+ "model": "Sensor"
+ }
+ ],
+ "instructions": {
+ "steps": [
+ {
+ "handler": "microsoft/apt:1",
+ "handlerProperties": {
+ "installedCriteria": "1.0"
+ },
+ "files": [
+ "fileId0"
+ ]
+ }
+ ]
+ },
+ "files": {
+ "fileId0": {
+ "filename": "sample-upgrade-apt-manifest.json",
+ "sizeInBytes": 210,
+ "hashes": {
+ "sha256": "mcB5SexMU4JOOzqmlJqKbue9qMskWY3EI/iVjJxCtAs="
+ }
+ }
+ },
+ "createdDateTime": "2022-08-20T18:32:01.8404544Z"
+ }
+ ```
+
+ **Example APT manifest**
+
+ ```json
+ {
+ "name": "Sample DU agent upgrade update",
+ "version": "1.0.0",
+ "packages": [
+ {
+ "name": "deviceupdate-agent"
+ }
+ ]
+ }
+ ```
+
+> [!NOTE]
+> It is required for the agent upgrade to be the last step. You may have other steps before the agent upgrade. Any steps added after the agent upgrade will not be executed or reported correctly, because the device reconnects with the Device Update service after the upgrade.
++
+3. Deploy the update.
+
+4. Once the update is successfully deployed, the device attributes show the updated PnP model details. The **Contract Model Name** will show **Device Update Model V2** and **Contract Model ID** will show **dtmi:azure:iot:deviceUpdateContractModel;2**.
+
+## Group and deployment behavior across releases
+
+- Devices with the Public Preview Refresh DU agent (0.8.x) and the GA DU agent (1.0.x) can be managed through the Device Update portal.
+
+- Devices with older agents (0.7.0/0.6.0) cannot be added to these groups.
++
+## Next steps
+[Understand Device Update agent configuration file](device-update-configuration-file.md)
+
+You can use the following tutorials for a simple demonstration of Device Update for IoT Hub:
+
+- [Package Update: Getting Started using Ubuntu Server 18.04 x64 Package agent](device-update-ubuntu-agent.md)
+
+- [Proxy Update: Getting Started using Device Update binary agent for downstream devices](device-update-howto-proxy-updates.md)
+
+- [Getting Started Using Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md)
+
+- [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
iot-hub-device-update Related Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/related-files.md
+
+ Title: Related files for Device Update for Azure IoT Hub | Microsoft Docs
+description: Understand the Device Update for IoT Hub related files feature.
++ Last updated : 08/23/2022++++
+# Use the related files feature in Device Update for IoT Hub
+
+Use the related files feature when you need to express relationships between different update files in a single update.
+
+## What is the related files feature?
+
+When importing an update to Device Update for IoT Hub, an import manifest containing metadata about the update payload is required. The file-level metadata in the import manifest can be a flat list of update payload files in the simplest case. However, for more advanced scenarios, you can instead use the related files feature, which provides a way for files to have a relationship specified between them.
+
+When creating an import manifest using the related files feature, you can add a collection of _related_ files to one or more of your _primary_ payload files. An example of this concept is the Device Update [delta update](delta-updates.md) feature, which uses related files to specify a delta update that is associated with a full image file. In the delta scenario, the related files feature allows the full image and delta update to both be imported as a single update action, and then either one can be deployed to a device. However, the related files feature isn't limited to delta updates, since it's designed to be extensible by our customers depending on their own unique scenarios.
+
+### Example import manifest using related files
+
+Below is an example of an import manifest that uses the related files feature to import a delta update. In this example, you can see that in the `files` section, there's a full image specified (`full-image-file-name`) with a `properties` item. The `properties` item in turn has an associated `relatedFiles` item below it. Within the `relatedFiles` section, you can see another `properties` section for the delta update file (`delta-from-v1-file-name`), and also a `downloadHandler` item with the appropriate `id` listed (`microsoft/delta:1`).
+
+```json
+ {
+ "updateId": {
+ // provider, name, version
+ },
+ "compatibility": [
+ {
+ // manufacturer, model, etc.
+ }
+ ],
+ "instructions": {
+ "steps": [
+ // Inline steps...
+ ]
+ },
+ "files": [
+ {
+ // standard file properties
+ "fileName": "full-image-file-name",
+ "sizeInBytes": 12345,
+ "hashes": {
+ "SHA256": "full-image-file-hash"
+ },
+ "mimeType": "application/octet-stream",
+ // new properties
+ "properties ": {},
+ "relatedFiles": [
+ {
+ // delta from version 1.0.0.0
+ // standard file properties
+ "fileName": "delta-from-v1-file-name",
+ "sizeInBytes": 1234,
+ "hashes": {
+ "SHA256": "delta-from-v1-file-hash"
+ },
+ "mimeType": "application/octet-stream",
+ // new properties
+ "properties": {
+ "microsoft.sourceFileHash": "delta-source-file-hash",
+ "microsoft.sourceFileHashAlgorithm": "sha256"
+ }
+ }
+ ],
+ // handler to download/process our related files
+ "downloadHandler": {
+ "id": "microsoft/delta:1"
+ }
+ }
+ ],
+ "createdDateTime": "2021-12-01T01:12:21Z",
+ "manifestVersion": "5.0"
+ }
+```
+
+## How to use related files
+
+>[!NOTE]
+>The documentation on this page uses delta updates as an example of how to use related files. If you want to use delta updates as a _feature_, follow the [delta update documentation](delta-updates.md).
+
+### Related files properties
+
+In certain scenarios, you may want to provide extra metadata for the update handler on your device to know how to interpret and properly use the files that you've specified as related files. This metadata is added as part of a `properties` property bag to the `file` and `relatedFile` objects.
+
+### Specify a download handler
+
+When you use the related files feature, you need to specify how to process these related files to produce the target file. You specify the processing approach by including a `downloadHandler` property in your import manifest. Including `downloadHandler` is required if you specify a non-empty collection of `relatedFiles` in a `file` element. You can specify a `downloadHandler` using a simple `id` property. The Download handler `id` has a limit of 64 ASCII characters.
+
+## Next steps
+
+- Learn about [import manifest schema](import-schema.md)
+- Learn about [delta updates](delta-updates.md)
iot-hub-device-update Troubleshoot Device Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/troubleshoot-device-update.md
Title: Troubleshoot common Device Update for Azure IoT Hub issues | Microsoft Docs description: This document provides a list of tips and tricks to help remedy many possible issues you may be having with Device Update for IoT Hub.-+ Previously updated : 2/17/2021 Last updated : 9/13/2022 # Device Update for IoT Hub Troubleshooting Guide
-This document lists some common questions and issues Device Update users have reported. As Device Update progresses through Public Preview, this troubleshooting guide will be updated periodically with new questions and solutions. If you encounter an issue that does not appear in this troubleshooting guide, refer to the [Contacting Microsoft Support](#contact) section to document your situation.
+This document lists some common questions and issues Device Update users have reported. If you encounter an issue that does not appear in this troubleshooting guide, refer to the [Contacting Microsoft Support](#contact) section to document your situation.
## <a name="import"></a>Importing updates
-### Q: I'm having trouble connecting my Device Update instance to my IoT Hub instance.
-_Please ensure your IoT Hub message routes are configured correctly, as per the [Device Update resources](./device-update-resources.md) documentation._
+### Q: I'm having trouble connecting my Device Update instance to my IoT Hub instance
-### Q: I'm encountering a role-related error (error message in Azure portal or a 403 API error).
-_You may not have access permissions configured correctly. Please ensure you have configured access permissions correctly as per the [Device Update access control](./device-update-control-access.md) documentation._
+Please ensure your IoT Hub message routes are configured correctly, as per the [Device Update resources](./device-update-resources.md) documentation.
-### Q: I'm encountering a 500-type error when importing content to the Device Update service.
-_An error code in the 500 range may indicate an issue with the Device Update service. Please wait 5 minutes, then try again. If the same error persists, please follow the instructions in the [Contacting Microsoft Support](#contact) section to file a support request with Microsoft._
+### Q: I'm encountering a role-related error (error message in Azure portal or a 403 API error)
-### Q: I want to keep the same compatibility properties (target my update to the same device type), but change the Provider or Name in the import manifest. But I get an error "Failed: error importing update due to exceeded limit" when I do so.
-_The same exact set of compatibility properties cannot be used with more than one Update Provider and Name combination. This allows the Device Update service to determine with certainty which updates should be available to deploy to a given device. If you need to update multiple components or partitions on a single device, the [proxy updates](./device-update-proxy-updates.md) feature provides that capability._
+You may not have access permissions configured correctly. Please ensure you have configured access permissions correctly as per the [Device Update access control](./device-update-control-access.md) documentation.
-### Q: I'm encountering an error message when importing content and would like to understand more about it.
-_Please refer to the [Device Update Error Codes](./device-update-error-codes.md#device-update-content-service) documentation for more detailed information on import-related error messages._
+### Q: I'm encountering a 500-type error when importing content to the Device Update service
+
+An error code in the 500 range may indicate an issue with the Device Update service. Please wait 5 minutes, then try again. If the same error persists, please follow the instructions in the [Contacting Microsoft Support](#contact) section to file a support request with Microsoft.
+
+### Q: I want to keep the same compatibility properties (target my update to the same device type), but change the Provider or Name in the import manifest. But I get an error "Failed: error importing update due to exceeded limit" when I do so
+
+The same exact set of compatibility properties cannot be used with more than one Update Provider and Name combination. This allows the Device Update service to determine with certainty which updates should be available to deploy to a given device. If you need to update multiple components or partitions on a single device, the [proxy updates](./device-update-proxy-updates.md) feature provides that capability.
+
+### Q: I'm encountering an error message when importing content and would like to understand more about it
+
+Please refer to the [Device Update Error Codes](./device-update-error-codes.md#device-update-content-service) documentation for more detailed information on import-related error messages.
## <a name="device-failure"></a>Device failures ### Q: How can I ensure my device is connected to Device Update for IoT Hub?
-_You can verify that your device is connected to Device Update by checking if it shows up under the "Ungrouped" devices section in the compliance view of Azure portal._
-### Q: One or more of my devices is failing to update.
-_There are many possible root causes for a device update failure. Please validate that the device is: 1) connected to your IoT Hub instance, 2) connected to your Device Update instance, and 3) the Delivery Optimization (DO) service is running. If all three are true for your device, please follow the instructions in the [Contacting Microsoft Support](#contact) section to file a support request with Microsoft._
+You can verify that your device is connected to Device Update by checking if it shows up under the "Ungrouped" devices section in the compliance view of Azure portal.
+
+### Q: One or more of my devices is failing to update
+
+There are many possible root causes for a device update failure. Please validate that the device is: 1) connected to your IoT Hub instance, 2) connected to your Device Update instance, and 3) the Delivery Optimization (DO) service is running. If all three are true for your device, please follow the instructions in the [Contacting Microsoft Support](#contact) section to file a support request with Microsoft.
+
+### Q: My Device Update agent is failing to start up
+
+One of the most common reasons for a failure in Device Update agent start-up is a malformed configuration file (du-config.json). Please refer to the [configuration file documentation](./device-update-configuration-file.md) and ensure your agent is configured correctly. Note that all values in the configuration file must use double-quotes.
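Because a single missing quote or comma is enough to make the file malformed, it can help to validate the JSON syntax directly on the device before digging deeper. A minimal check, assuming the default configuration path and that `python3` is present on the device:

```bash
# Prints a parse error if /etc/adu/du-config.json is not valid JSON; otherwise confirms success.
python3 -m json.tool /etc/adu/du-config.json > /dev/null && echo "du-config.json parses as valid JSON"
```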
## <a name="deploy"></a> Deploying an update ### Q: I've deployed an update to my device(s), but the compliance status says it isn't on the latest update. What should I do?
-_The device compliance status can take up to 5 minutes to refresh. Please wait, then check again._
+
+The device compliance status can take up to 5 minutes to refresh. Please wait, then check again.
+ ### Q: My device's deployment status shows incompatible, what should I do?
-_The manufacturer and model properties of a targeted device may have been changed after connecting the device to IoT Hub, causing the device to now be considered incompatible with the update content of the current deployment._
-_Check the [ADU Core Interface](./device-update-plug-and-play.md) to see what manufacturer and model your device is reporting to the Device Update service, and make sure it matches the manufacturer and model you specified in the [import manifest](./import-concepts.md) of the update content being deployed. You can change these properties for a given device using the [Device Update configuration file](./device-update-configuration-file.md)._
+The manufacturer and model properties of a targeted device may have been changed after connecting the device to IoT Hub, causing the device to now be considered incompatible with the update content of the current deployment.
+
+Check the [ADU Core Interface](./device-update-plug-and-play.md) to see what manufacturer and model your device is reporting to the Device Update service, and make sure it matches the manufacturer and model you specified in the [import manifest](./import-concepts.md) of the update content being deployed. You can change these properties for a given device using the [Device Update configuration file](./device-update-configuration-file.md).
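As a quick way to see which manufacturer and model values the agent is configured to report, you can inspect the configuration file on the device. This assumes the default configuration file path; adjust if your agent uses a different location.

```bash
# Print the manufacturer and model entries from the Device Update configuration file.
grep -E '"(manufacturer|model)"' /etc/adu/du-config.json
```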
### Q: I see my deployment is in "Active" stage but none of my devices are "In progress" with the update. What should I do?
-_Ensure that your deployment start date is not set in the future. When you create a new deployment, the deployment start date is defaulted to the next day as a safeguard unless you explicitly change it. You can either wait for the deployment start date to arrive, or cancel the ongoing deployment and create a new deployment with the desired start date._
-### Q: I'm trying to group my devices, but I don't see the tag in the drop-down when creating a group.
-_Ensure that you have correctly configured the message routes in your IoT Hub as per the [Device Update resources](./device-update-resources.md) documentation. You will have to tag your device again after configuring the route._
+Ensure that your deployment start date is not set in the future. When you create a new deployment, the deployment start date is defaulted to the next day as a safeguard unless you explicitly change it. You can either wait for the deployment start date to arrive, or cancel the ongoing deployment and create a new deployment with the desired start date.
-_Another root cause could be that you applied the tag before connecting your device to Device Update for IoT Hub. Ensure that your device is already connected to Device Update. You can verify that your device is connected to Device Update for IoT Hub by checking if it shows up under "Ungrouped" devices in the compliance view. Temporarily add a tag of a different value, and then add your intended tag again once the device is connected._
+### Q: I'm trying to group my devices, but I don't see the tag in the drop-down when creating a group
-_If you are using Device Provisioning Service (DPS), then ensure that you tag your devices after they are provisioned and not during the Device creation process. If you have already tagged your device during the Device creation step, then you will have to temporarily tag your device with a different value after it is provisioned, and then add your intended tag again._
+Ensure that you have correctly configured the message routes in your IoT Hub as per the [Device Update resources](./device-update-resources.md) documentation. You will have to tag your device again after configuring the route.
-### Q: My deployment completed successfully, but some devices failed to update.
-_This may have been caused by a client-side error on the failed devices. Please see the Device Failures section of this troubleshooting guide._
+Another root cause could be that you applied the tag before connecting your device to Device Update for IoT Hub. Ensure that your device is already connected to Device Update. You can verify that your device is connected to Device Update for IoT Hub by checking if it shows up under "Ungrouped" devices in the compliance view. Temporarily add a tag of a different value, and then add your intended tag again once the device is connected.
-### Q: I encountered an error in the UX when trying to initiate a deployment.
-_This may have been caused by a service/UX bug, or by an API permissions issue. Please follow the instructions in the [Contacting Microsoft Support](#contact) section to file a support request with Microsoft._
+If you are using Device Provisioning Service (DPS), then ensure that you tag your devices after they are provisioned and not during the Device creation process. If you have already tagged your device during the Device creation step, then you will have to temporarily tag your device with a different value after it is provisioned, and then add your intended tag again.
-### Q: I started a deployment but it isn't reaching an end state.
-_This may have been caused by a service performance issue, a service bug, or a client bug. Please retry your deployment after 10 minutes. If you encounter the same issue, please pull your device logs and refer to the Device Failures section of this troubleshooting guide. If the same issue persists, please follow the instructions in the [Contacting Microsoft Support](#contact) section to file a support request with Microsoft._
+### Q: My deployment completed successfully, but some devices failed to update
-### Q: I migrated from a device level agent to adding the agent as a Module identity on the device, and my update shows as 'in-progress' even though it has been applied to the device.
-_This may have been caused if you did not remove the older agent that was communicating over the Device Twin. When you provision the Device Update agent as a Module (see [how to](device-update-agent-provisioning.md)) all communications between the device and the Device Update service happen over the Module Twin so do remember to tag the Module Twin of the device when creating [groups](device-update-groups.md) and all [communications](device-update-plug-and-play.md) must happen over the module twin.
+This may have been caused by a client-side error on the failed devices. Please see the Device Failures section of this troubleshooting guide.
+
+### Q: I encountered an error in the UX when trying to initiate a deployment
+
+This may have been caused by a service/UX bug, or by an API permissions issue. Please follow the instructions in the [Contacting Microsoft Support](#contact) section to file a support request with Microsoft.
+
+### Q: I started a deployment but it isn't reaching an end state
+
+This may have been caused by a service performance issue, a service bug, or a client bug. Please retry your deployment after 10 minutes. If you encounter the same issue, please pull your device logs and refer to the Device Failures section of this troubleshooting guide. If the same issue persists, please follow the instructions in the [Contacting Microsoft Support](#contact) section to file a support request with Microsoft.
+
+### Q: I migrated from a device level agent to adding the agent as a Module identity on the device, and my update shows as 'in-progress' even though it has been applied to the device
+
+This may have been caused if you did not remove the older agent that was communicating over the device twin. When you provision the Device Update agent as a module (see the [how-to guide](device-update-agent-provisioning.md)), all communication between the device and the Device Update service happens over the module twin. Remember to tag the module twin of the device when creating [groups](device-update-groups.md), and note that all [communication](device-update-plug-and-play.md) must happen over the module twin.
## <a name="download"></a> Downloading updates onto devices ### Q: How do I resume a download when a device has reconnected after a period of disconnection?
-_The download will self-resume when connectivity is restored within a 24-hour period. After 24 hours, the download will need to be reinitiated by the user._
+
+The download will self-resume when connectivity is restored within a 24-hour period. After 24 hours, the download will need to be reinitiated by the user.
+ ## <a name="mcc"></a> Using Microsoft Connected Cache (MCC)
-### Q: I am encountering an issue when attempting to deploy the MCC module on my IoT Edge device.
-_Refer to the [IoT Edge documentation]() for deploying Edge modules to IoT Edge devices. You can check if the MCC module is running successfully on your IoT Edge device by navigating to http://localhost:5100/Summary._
-### Q: One of my IoT devices is attempting to download an update through MCC, but is failing.
-_There are several issues that could be causing an IoT device to fail in connecting to MCC. In order to diagnose the issue, please collect the DO client and Nginx logs from the failing device (see the [Contacting Microsoft Support](#contact) section for instructions on gathering client logs)._
+### Q: I am encountering an issue when attempting to deploy the MCC module on my IoT Edge device
+
+Refer to the [IoT Edge documentation](../iot-edge/index.yml) for deploying Edge modules to IoT Edge devices. You can check if the MCC module is running successfully on your IoT Edge device by navigating to http://localhost:5100/Summary.
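From the IoT Edge device itself, a simple smoke test of that endpoint (it only exercises the summary URL mentioned above) looks like this:

```bash
# Returns the HTTP status code; 200 suggests the MCC module is up and serving its summary page.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5100/Summary
```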
+
+### Q: One of my IoT devices is attempting to download an update through MCC, but is failing
+
+There are several issues that could be causing an IoT device to fail in connecting to MCC. In order to diagnose the issue, please collect the DO client and Nginx logs from the failing device (see the [Contacting Microsoft Support](#contact) section for instructions on gathering client logs).
+
+Your device may be failing to pull content from the Internet to pass to its MCC module because the URL it's using isn't allowed. To determine whether this is the case, you will need to check your IoT Edge environment variables in the Azure portal.
+
+## <a name="instance"></a> Troubleshooting a missing instance in the Azure portal
+
+### Q: I don't see an instance of Device Update for IoT Hub when I select the "gear" icon
+
+There are a few possible causes for this issue. See below for troubleshooting steps.
+
+A Device Update instance needs to be associated with an Azure IoT hub in the same resource group and subscription. If you've moved either your Device Update instance or your hub to a different resource group or subscription, you may not see your instance in the Azure portal. You'll need to do one of the following steps in order to continue using Device Update for IoT Hub:
+
+- Return the moved item(s) to their original configuration.
+- If you only moved your IoT hub from one resource group to another, modify your Device Update instance with the IoT hub's new resourceId.
+- If you moved item(s) from one subscription to another, make sure the Device Update account and IoT hub are in the same subscription, and then modify your Device Update instance with the IoT hub's new resourceId.
+
+At least Read-level permissions are needed for both your IoT hub and your Device Update for IoT Hub account in order to access Device Update functionality via the IoT hub experience in the Azure portal.
+
+- To manage permissions for your IoT Hub:
+ - Select your hub from the Azure portal.
+ - Select "Access control (IAM)" from the left-hand navigation bar.
+ - Select "Add role assignment".
+ - Select a role with at least Read access and select Next.
+ - Next to "Members", select "+Select members".
+ - Add your account in the right-hand flyout, and select the "Select" button.
+ - Select "Review + assign".
+- To manage permissions for your Device Update for IoT Hub account, ask the owner of the account to take these steps:
+ - Select your Device Update account from the Azure portal.
+ - Select "Access control (IAM)" from the left-hand navigation bar.
+ - Select "Add role assignment".
+ - Select the Reader role (or one with equivalent permissions).
+ - Next to "Members", select "+Select members".
+ - Add your account in the right-hand flyout, and select the "Select" button.
+ - Select "Review + assign".
+
+Learn more about [role-based access control](device-update-control-access.md) for the Device Update service.
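If you prefer to assign the Reader role from the command line instead of the portal, a hedged Azure CLI equivalent is sketched below. The user principal name and resource IDs are placeholders; copy the exact resource IDs for your IoT hub and Device Update account from the portal rather than relying on the paths shown here.

```azurecli
# Grant Reader on the IoT hub to a user (replace the placeholders with your values).
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Devices/IotHubs/<iot-hub-name>"

# Grant Reader on the Device Update account to the same user.
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DeviceUpdate/accounts/<device-update-account-name>"
```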
-_Your device may be failing to pull content from the Internet to pass to its MCC module because the URL it's using isn't allowed. To determine if so, you will need to check your IoT Edge environment variables in Azure portal._
## <a name="contact"></a> Contacting Microsoft Support
-If you run into issues that can't be resolved using the FAQs above, you can file a support request with Microsoft Support through the Azure portal interface. Depending on which category you indicate your issue belongs to, you may be asked to gather and share additional data to help Microsoft Support investigate your issue.
+If you run into issues that can't be resolved using the FAQs above, you can file a support request with Microsoft Support through the Azure portal interface. Depending on which category you indicate your issue belongs to, you may be asked to gather and share additional data to help Microsoft Support investigate your issue.
-Please see below for instructions on how to gather each data type. You can use [getDevices]() to check for
-additional information in the payload response of the API.
+Please see below for instructions on how to gather each data type.
+
+You can use [getDevice](/dotnet/api/azure.iot.deviceupdate.devicemanagementclient.getdevice?view=azure-dotnet-preview&preserve-view=true) to check for additional information in the payload response of the API.
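+
+If you work from the command line, a similar lookup may be available through the `azure-iot` CLI extension. The following is only a sketch under that assumption; the account, instance, and device names are placeholders, and the exact parameters should be verified with `az iot du device show --help`.
+
+```azurecli
+# Retrieve Device Update details for a single device (hypothetical resource names)
+az iot du device show \
+  --account myDeviceUpdateAccount \
+  --instance myInstance \
+  --device-id myDevice
+```
+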
In addition, the following information can be useful for narrowing down the root cause of your issue:
-* What type of device you are attempting to update (Azure Percept, IoT Edge Gateway, other)
-* What Device Update client type you are using (Image-based, Package-based, Simulator)
-* What OS your device is running
-* Details regarding your device's architecture
-* Whether you have successfully used Device Update to update a device before
+
+- What type of device you are attempting to update (Azure Percept, IoT Edge Gateway, other)
+- What Device Update client type you are using (Image-based, Package-based, Simulator)
+- What OS your device is running
+- Details regarding your device's architecture
+- Whether you have successfully used Device Update to update a device before
If you have any of the above information available, please include it in your description of the issue.

### Collecting client logs
-* On the Raspberry Pi Device there are two sets of logs found here:
+- On the Raspberry Pi Device there are two sets of logs found here:
```markdown
/adu/logs
/var/cache/do-client-lite/log
```
-* For the packaged client the logs are found here:
+- For the packaged client the logs are found here:
```markdown
/var/log/adu
/var/cache/do-client-lite/log
```
-* For the Simulator, the logs are found here:
+- For the Simulator, the logs are found here:
```markdown
/tmp/aduc-logs
```

### Error codes

You may be asked to provide error codes when reporting an issue related to importing an update, a device failure, or deploying an update. Error codes can be obtained by looking at the [ADUCoreInterface](./device-update-plug-and-play.md) interface. Please refer to the [Device Update error codes](./device-update-error-codes.md) documentation for information on how to parse error codes for self-diagnosis and troubleshooting.

### Trace ID

You may be asked to provide a trace ID when reporting an issue related to importing or deploying an update.
-The trace ID for a given user-action can be found within the API response, or in the Import History section of the Azure portal user interface.
+The trace ID for a given user-action can be found within the API response, or in the Import History section of the Azure portal user interface.
Currently, trace IDs for deployment actions are only accessible through the API response.

### Deployment ID

You may be asked to provide a deployment ID when reporting an issue related to deploying an update. The deployment ID is created by the user when calling the API to initiate a deployment.

Currently, deployment IDs for deployments initiated from the Azure portal user interface are automatically generated and not surfaced to the user.

### IoT Hub instance name

You may be asked to provide your IoT Hub instance's name when reporting an issue related to device failures or deploying an update. The IoT Hub name is chosen by the user when first provisioned.

### Device Update account name

You may be asked to provide your Device Update account's name when reporting an issue related to importing an update, device failures, or deploying an update. The Device Update account name is chosen by the user when first signing up for the service. More information can be found in the [Device Update resources](./device-update-resources.md) documentation.

### Device Update instance name

You may be asked to provide your Device Update instance's name when reporting an issue related to importing an update, device failures, or deploying an update. The Device Update instance name is chosen by the user when first provisioned. More information can be found in the [Device Update resources](./device-update-resources.md) documentation.

### Device ID

You may be asked to provide a device ID when reporting an issue related to device failures or deploying an update. The device ID is defined by the customer when the device is first provisioned. It can also be retrieved from the device's Device Twin.

### Update ID

You may be asked to provide an update ID when reporting an issue related to deploying an update. The update ID is defined by the customer when initiating a deployment.

### Nginx logs

You may be asked to provide Nginx logs when reporting an issue related to Microsoft Connected Cache.

### ADU-conf.txt

You may be asked to provide the Device Update configuration file ("adu-conf.txt") when reporting an issue related to deploying an update. The configuration file is optional and created by the user following the instructions in the [Device Update configuration](./device-update-configuration-file.md) documentation.

### Import manifest

You may be asked to provide your import manifest file when reporting an issue related to importing or deploying an update. The import manifest is a file created by the customer when importing update content to the Device Update service.
-**[Next Step: Learn more about Device Update error codes](.\device-update-error-codes.md)**
+## Next steps
+
+[Learn more about Device Update error codes](./device-update-error-codes.md)
iot-hub-device-update Understand Device Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/understand-device-update.md
Title: Introduction to Device Update for Azure IoT Hub | Microsoft Docs
-description: Device Update for IoT Hub is a service that enables you to deploy over-the-air updates (OTA) for your IoT devices.
+ Title: Introduction to Device Update for Azure IoT Hub
+description: Device Update for IoT Hub is a service that enables you to deploy over-the-air updates for your IoT devices.
Previously updated : 2/11/2021 Last updated : 10/31/2022
-# Device Update for IoT Hub (Preview) Overview
+# What is Device Update for IoT Hub?
-Device Update for IoT Hub is a service that enables you to deploy over-the-air updates (OTA) for your IoT devices.
+Device Update for Azure IoT Hub is a service that enables you to deploy over-the-air updates for your IoT devices.
-As organizations look to further enable productivity and operational efficiency, Internet of Things (IoT) solutions continue to be adopted at increasing rates. This makes it essential that the devices forming these solutions are built on a foundation of reliability and security and are easy to connect and manage at scale. Device Update for IoT Hub is an end-to-end platform that customers can use to publish, distribute, and manage over-the-air updates for everything from tiny sensors to gateway-level devices.
+As Internet of Things (IoT) solutions continue to be adopted at increasing rates, it's essential that the devices forming these solutions are easy to connect and manage at scale. Device Update for IoT Hub is an end-to-end platform that customers can use to publish, distribute, and manage over-the-air updates for everything from tiny sensors to gateway-level devices.
-To realize the full benefits of IoT-enabled digital transformation, customers need this ability to operate, maintain, and update devices at scale. Explore the benefits of implementing Device Update for IoT Hub, which include being able to rapidly respond to security threats and deploy new features to obtain business objectives without incurring the extra development and maintenance costs of building your own update platforms.
+To realize the full benefits of IoT-enabled digital transformation, customers need the ability to operate, maintain, and update devices at scale. Device Update for IoT Hub unlocks capabilities like:
+
+* Rapidly responding to security threats
+* Deploying new features to obtain business objectives
+* Avoiding the extra development and maintenance costs of building your own update platforms.
## Support for a wide range of IoT devices
+Device Update for IoT Hub offers optimized update deployment and streamlined operations through integration with [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/). This integration makes it easy to adopt Device Update on any existing solution. It provides a cloud-hosted solution to connect virtually any device. Device Update supports a broad range of IoT operating systems, including Linux and [Azure RTOS](https://azure.microsoft.com/services/rtos/) (real-time operating system), and is extensible via open source. We're codeveloping Device Update for IoT Hub offerings with our semiconductor partners, including STMicroelectronics, NXP, Renesas, and Microchip. See the [samples](https://github.com/azure-rtos/samples/tree/PublicPreview/ADU) of key semiconductor evaluation boards that include the get started guides to learn how to configure, build, and deploy the over-the-air updates to MCU class devices.
-Device Update for IoT Hub is designed to offer optimized update deployment and streamlined operations through integration with [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/). This integration makes it easy to adopt Device Update on any existing solution. It provides a cloud-hosted solution to connect virtually any device. Device Update supports a broad range of IoT operating systemsΓÇöincluding Linux and [Azure RTOS](https://azure.microsoft.com/services/rtos/) (real-time operating system)ΓÇöand is extensible via open source. We are codeveloping Device Update for IoT Hub offerings with our semiconductor partners, including STMicroelectronics, NXP, Renesas, and Microchip. See the [samples](https://github.com/azure-rtos/samples/tree/PublicPreview/ADU) of key semiconductors evaluation boards that includes the get started guides to learn how to configure, build, and deploy the over-the-air (OTA) updates to MCU class devices.
+Both a Device Update agent simulator binary and Raspberry Pi reference Yocto images are provided.
+Device Update agents are built and provided for Ubuntu Server 18.04, Ubuntu Server 20.04, and Debian 10. Device Update for IoT Hub also provides open-source code if you aren't
+running one of the above platforms. You can port the agent to the distribution you're running.
-Both a Device Update Agent Simulator binary and Raspberry Pi reference Yocto images are provided.
-Device Update for IoT Hub also supports updating Azure IoT Edge devices. A Device Update Agent is provided for Ubuntu Server 18.04 amd64
-platform. Device Update for IoT Hub also provides open-source code if you are not
-running one of the above platforms. You can port the agent to the distribution you
-are running.
+Device Update for IoT Hub also supports updating Azure IoT Edge devices.
-Device Update works with IoT Plug and Play and can manage any device that supports
-the required IoT Plug and Play interfaces. For more information, see [Device Update for IoT Hub and
-IoT Plug and Play](device-update-plug-and-play.md).
+Device Update works with IoT Plug and Play and can manage any device that supports the required IoT Plug and Play interfaces. For more information, see [Device Update for IoT Hub and IoT Plug and Play](device-update-plug-and-play.md).
## Support for a wide range of update artifacts
-Device Update for IoT Hub supports two forms of updates ΓÇô image-based
-and package-based.
+Device Update for IoT Hub supports two forms of updates: package-based and image-based.
-Package-based updates are targeted updates that alter only a specific component
-or application on the device. This leads to lower consumption of
-bandwidth and helps reduce the time to download and install the update. Package
-updates typically allow for less downtime of devices when applying an update and
-avoid the overhead of creating images.
+*Package-based updates* are targeted updates that alter only a specific component or application on the device. This update type leads to lower consumption of bandwidth and helps reduce the time to download and install the update. Package updates typically allow for less downtime of devices when applying an update and avoid the overhead of creating images.
-Image updates provide a higher level of confidence in the end-state
-of the device. It is typically easier to replicate the results of an
-image-update between a pre-production environment and a production environment,
-since it doesnΓÇÖt pose the same challenges as packages and their dependencies.
-Due to their atomic nature, one can also adopt an A/B failover model easily.
+*Image-based updates* provide a higher level of confidence in the end-state of the device. It's typically easier to replicate the results of an image update between a pre-production environment and a production environment, since it doesn't pose the same challenges as packages and their dependencies. Due to the atomic nature of image updates, one can also adopt an A/B failover model easily.
-There is no one right answer, and you might choose differently based on
-your specific use cases. Device Update for IoT Hub supports both image and package
-form of updating, allowing you to choose the right updating model
-for your device environment.
+There's no one right answer, and you might choose differently based on your specific use cases. Device Update for IoT Hub supports both image and package forms of updating, allowing you to choose the right updating model for your device environment.
## Flexible features for updating devices
-Device Update for IoT Hub features provide a powerful and flexible experience, including:
+Device Update for IoT Hub provides powerful and flexible features, including:
+
+* Management and reporting tools.
+
+ * An update management experience that is integrated with Azure IoT Hub.
+ * Programmatic APIs to enable automation and custom portal experiences.
+ * Subscription- and role-based access controls available through the Azure portal.
+ * At-a-glance update compliance and status views across heterogenous device fleets.
+ * Azure CLI support for creating and managing Device Update resources, groups, and deployments from the command line.
+
+* Detailed control over the update deployment process.
+
+ * Gradual update rollout through device grouping and update scheduling controls.
+ * Support for resilient device updates (A/B) to deliver seamless rollback.
+ * Automatic rollback to a defined fallback version for managed devices that meet the rollback criteria.
+ * Delta updates (public preview) that allow you to generate smaller updates that represent only the changes between the current image and target image, which can reduce bandwidth for downloading updates to devices.
+
+* Troubleshooting features to help you diagnose and repair devices, including agent check and device sync.
-* Update management UX integrated with Azure IoT Hub
-* Gradual update rollout through device grouping and update scheduling controls
-* Programmatic APIs to enable automation and custom portal experiences
-* At-a-glance update compliance and status views across heterogenous device fleets
-* Support for resilient device updates (A/B) to deliver seamless rollback
-* Subscription and role-based access controls available through the Azure.com portal
-* On-premises content cache and Nested Edge support to enable updating cloud disconnected devices
-* Detailed update management and reporting tools
+* On-premises content cache and nested edge support to enable updating cloud disconnected devices.
-With Device Update for IoT Hub management and deployment controls, users can maximize productivity and save valuable time. Device Update for IoT Hub includes the ability to group devices and specify
-to which devices an update should be deployed. Users also can view the status of update deployments and make sure each device successfully applies updates.
+* Automatic grouping of devices based on their compatibility properties and device twin tags.
-When an update failure happens, Device Update for IoT Hub also allows users to identify the devices that failed to apply the update plus see related failure details. The ability to identify which devices failed to update means countless manual hours saved trying to pinpoint the source.
+With Device Update for IoT Hub management and deployment controls, users can maximize productivity and save valuable time. Device Update for IoT Hub includes the ability to group devices and specify to which devices an update should be deployed. Users also can view the status of deployments and make sure each device successfully applies updates.
+
+When an update failure happens, Device Update for IoT Hub helps users to identify the devices that failed to apply the update and see related failure details. The ability to identify which devices failed to update means countless manual hours saved trying to pinpoint the source.
### Best-in-class security at global scale

Microsoft Azure supports more than a billion IoT devices around the world, a number that's growing rapidly by the day. Device Update for IoT Hub builds upon this experience and the proven reliability demonstrated by the Windows Update platform, so devices can be seamlessly updated on a global scale.
-Device Update for IoT Hub uses comprehensive cloud-to-edge security that is developed for Microsoft Azure, so customers donΓÇÖt need to spend time figuring out how to build it in themselves from the ground up.
-
+Device Update for IoT Hub uses comprehensive cloud-to-edge security developed for Microsoft Azure, so customers don't need to spend time figuring out how to build it themselves from the ground up. For more information, see [Device Update security model](device-update-security.md).
## Device Update workflows
-Device Update functionality can be broken down into three areas: Agent Integration,
-Importing, and Management.
+Device Update functionality can be broken down into three areas: agent integration, importing, and management.
-### Device Update Agent
+### Device Update agent
-When an update command is received on a device, it will execute the requested
-phase of updating (either Download, Install and Apply). During each phase,
-status is returned to Device Update via IoT Hub so you can view the current status of a
-deployment. If there are no updates in progress, the status is returned as ΓÇ£IdleΓÇ¥. A deployment can be canceled at any time.
+When an update command is received on a device, the *Device Update agent* executes the requested phase of updating (Download, Install, or Apply). During each phase, the agent returns the deployment status to Device Update via IoT Hub so you can view the current status of a deployment. If there are no updates in progress, the status is returned as "Idle". A deployment can be canceled at any time.
:::image type="content" source="media/understand-device-update/client-agent-workflow.png" alt-text="Diagram of Device Update agent workflow." lightbox="media/understand-device-update/client-agent-workflow.png":::
-[Learn More](device-update-agent-overview.md) about device update agent.
+For more information, see [Device Update for IoT Hub agent overview](device-update-agent-overview.md).
### Importing
-Importing is how your updates are ingested into Device Update so they can be deployed to devices. Device Update supports rolling out a single update per device. This makes it ideal for
-full-image updates that update an entire OS partition at once, or an [APT manifest](device-update-apt-manifest.md) that describes all the packages you want to update
-on your device from a designated repository. To import updates into Device Update, you first create an import manifest
-describing the update, then upload the update file(s) and the import
-manifest to an Azure Storage container. After that, you can use the Azure portal or the [Device Update
-REST API](/rest/api/deviceupdate/) to initiate the asynchronous process of update import. Device Update uploads the files, processes
-them, and makes them available for distribution to IoT devices.
+*Importing* is how your updates are ingested into Device Update so they can be deployed to devices. Device Update supports rolling out a single update per device. This support makes it ideal for full-image updates that update an entire OS partition, or an [APT manifest](device-update-apt-manifest.md) that describes the individual packages you want to update on your device.
+
+To import updates into Device Update, you first create an import manifest describing the update, then upload the update file(s) and the import manifest to an Azure Storage container. After that, you can use the Azure portal or the [Device Update REST API](/rest/api/deviceupdate/) to initiate the asynchronous process of update import. Device Update uploads the files, processes them, and makes them available for distribution to IoT devices.
-For sensitive content, protect the download using a shared access signature (SAS), such as an ad-hoc SAS for Azure Blob Storage. [Learn more about
-SAS](../storage/common/storage-sas-overview.md)
+For sensitive content, protect the download using a shared access signature (SAS), such as an ad-hoc SAS for Azure Blob Storage. For more information, see [Grant limited access to Azure Storage resources using SAS](../storage/common/storage-sas-overview.md).
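+
+As an illustration of that flow, uploading the update files and generating an ad-hoc, read-only SAS from the command line might look like the following sketch; the storage account, container, source folder, and expiry values are placeholders.
+
+```azurecli
+# Upload the update payload and import manifest to a storage container (hypothetical names)
+az storage blob upload-batch \
+  --account-name mystorageaccount \
+  --destination updates \
+  --source ./my-update-files
+
+# Generate a short-lived, read-only SAS token for the container
+az storage container generate-sas \
+  --account-name mystorageaccount \
+  --name updates \
+  --permissions r \
+  --expiry 2030-01-01T00:00:00Z \
+  --output tsv
+```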
:::image type="content" source="media/understand-device-update/import-update.png" alt-text="Diagram of Device Update for IoT Hub importing workflow." lightbox="media/understand-device-update/import-update.png":::
-[Learn More](import-concepts.md) about importing updates.
+For more information, see [Import updates into Device Update for IoT Hub](import-concepts.md).
### Grouping and deployment

After importing an update, you can view compatible updates for your devices and device classes.
-Device Update supports the concept of **Groups** via tags in IoT Hub. Deploying an update
-out to a test group first is a good way to reduce the risk of issues during a
-production rollout.
+Device Update supports the concept of *groups* via tags in IoT Hub. Deploying an update to a test group first is a good way to reduce the risk of issues during a production rollout.
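+
+As a rough illustration, assigning a device to a group by tagging its device twin might look like the sketch below. It assumes the `ADUGroup` tag name used by Device Update and the `az iot hub device-twin update` command; the hub and device names are placeholders, and the `--tags` parameter should be verified against your CLI version.
+
+```azurecli
+# Tag a device twin so Device Update places the device in the "test-group" group (hypothetical names)
+az iot hub device-twin update \
+  --hub-name myIoTHub \
+  --device-id myDevice \
+  --tags '{"ADUGroup": "test-group"}'
+```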
-In Device Update, deployments are a way of connecting the
-right content to a specific set of compatible devices. Device Update orchestrates the
-process of sending commands to each device, instructing them to download and
-install the updates and getting status back.
+In Device Update, *deployments* are a way of connecting the
+right content to a specific set of compatible devices. Device Update orchestrates the process of sending commands to each device, instructing them to download and install the updates and getting status back.
:::image type="content" source="media/understand-device-update/manage-deploy-updates.png" alt-text="Diagram of Device Update for IoT Hub grouping and deployment workflow." lightbox="media/understand-device-update/manage-deploy-updates.png":::
-[Learn more](device-update-compliance.md) about deployment concepts
-
-[Learn more](device-update-groups.md) about device update groups
+For more information about deployment concepts, see [Device Update compliance](device-update-compliance.md).
+For more information about Device Update groups, see [Device groups](device-update-groups.md).
## Next steps
-> [!div class="nextstepaction"]
-> [Create device update account and instance](create-device-update-account.md)
+Get started with Device Update by trying a sample:
+
+[Tutorial: Device Update using the simulator agent](device-update-simulator.md)
iot-hub-device-update Update Manifest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/update-manifest.md
Each manifest type has its own schema and schema version.
}, "compatibility": [ {
- "deviceManufacturer": "Contoso",
- "deviceModel": "Toaster"
+ "manufacturer": "Contoso",
+ "model": "Toaster"
} ], "instructions": {
load-balancer Load Balancer Standard Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-availability-zones.md
Using multiple frontends allow you to load balance traffic on more than one port
### Transition between regional zonal models
-In the case where a region is augmented to have [availability zones](../availability-zones/az-overview.md), any existing IPs would remain non-zonal like IPs used for load balancer frontends. To ensure your architecture can take advantage of the new zones, creation of new frontend IPs is recommended. Once created, replicate the appropriate rules and configurations to utilize these new IPs.
+In the case where a region is augmented to have [availability zones](../availability-zones/az-overview.md), any existing IPs, such as those used for load balancer frontends, would remain non-zonal. To ensure your architecture can take advantage of the new zones, creation of new frontend IPs is recommended. Once created, you can replace the existing non-zonal frontend with a new zone-redundant frontend using the method described [here](../virtual-network/ip-services/configure-public-ip-load-balancer.md#change-or-remove-public-ip-address). All existing load balancing and NAT rules will transition to the new frontend.
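+
+For reference, creating a zone-redundant Standard public IP and adding it as a new frontend might look like the following sketch; the resource names are placeholders, and the load balancer must be a Standard SKU in a region that has availability zones.
+
+```azurecli
+# Create a zone-redundant Standard public IP (hypothetical names)
+az network public-ip create \
+  --resource-group myResourceGroup \
+  --name myZonalFrontendIP \
+  --sku Standard \
+  --zone 1 2 3
+
+# Add it as a new frontend IP configuration on the existing load balancer
+az network lb frontend-ip create \
+  --resource-group myResourceGroup \
+  --lb-name myLoadBalancer \
+  --name myNewFrontend \
+  --public-ip-address myZonalFrontendIP
+```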
### Control vs data plane implications
Review [Azure cloud design patterns](/azure/architecture/patterns/) to improve t
- Learn more about [Standard Load Balancer](./load-balancer-overview.md) - Learn how to [load balance VMs within a zone using a zonal Standard Load Balancer](./quickstart-load-balancer-standard-public-cli.md) - Learn how to [load balance VMs across zones using a zone redundant Standard Load Balancer](./quickstart-load-balancer-standard-public-cli.md)-- Learn about [Azure cloud design patterns](/azure/architecture/patterns/) to improve the resiliency of your application to failure scenarios.
+- Learn about [Azure cloud design patterns](/azure/architecture/patterns/) to improve the resiliency of your application to failure scenarios.
load-testing Resource Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-limits-quotas-capacity.md
Previously updated : 08/30/2022 Last updated : 09/21/2022 # Service limits in Azure Load Testing Preview
In this section, you learn about the default and maximum quota limits.
The following limits apply on a per-region, per-subscription basis.
-| Resource | Limit |
+| Resource | Default limit | Maximum limit |
|||
-| Concurrent engine instances | 100 |
-| Engine instances per test run | 45 |
+| Concurrent engine instances | 5-100 <sup>1</sup> | 5000 |
+| Engine instances per test run | 1-45 <sup>1</sup> | 5000 |
+
+<sup>1</sup> To request an increase beyond this limit, contact Azure Support. Default limits vary by offer category type.
### Test runs The following limits apply on a per-region, per-subscription basis.
-| Resource | Limit |
+| Resource | Default limit | Maximum limit |
|||
-| Concurrent test runs | 25 |
+| Concurrent test runs | 5-25 <sup>2</sup> | 5000 |
| Test duration | 3 hours |
+<sup>2</sup> To request an increase beyond this limit, contact Azure Support. Default limits vary by offer category type.
+ ### Data retention Azure Load Testing captures metrics, test results, and logs for each test run. The following data retention limits apply:
Azure Load Testing captures metrics, test results, and logs for each test run. T
To raise the limit or quota above the default limit, [open an online customer support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) at no charge.
-1. Select **create a support ticket**.
+1. Select **Create a support ticket**.
+
+1. Provide a **summary** of your issue.
+
+1. Select **Issue type** as *Service and subscription limits (quotas)*.
+
+1. Select your subscription. Then, select **Quota Type** as *Azure Load Testing - Preview*.
+
+1. Select **Next** to continue.
+
+1. In **Problem details**, select **Enter details**.
+
+1. On the **Quota details** pane, for **Location**, enter the Azure region where you want to increase the limit.
-1. Provide a summary of your issue.
+1. Select the **Quota type** for which you want to increase the limit.
-1. Select **Issue type** as *Technical*.
+1. Enter the **New limit requested** and select **Save and continue**.
-1. Select your subscription. Then, select **Service Type** as *Azure Load Testing - Preview*.
+1. Fill the details for **Advanced diagnostic information**, **Support method**, and **Contact information**.
-1. Select **Problem type** as *Test Execution*.
+1. Select **Next** to continue.
-1. Select **Problem subtype** as *Provisioning stalls or fails*.
+1. Select **Create** to submit the support request.
## Next steps
machine-learning How To Access Data Batch Endpoints Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-access-data-batch-endpoints-jobs.md
# Accessing data from batch endpoints jobs
-Batch endpoints can be used to perform batch scoring on large amounts of data. Such data can be placed in different places. In this tutorial we'll cover the different places where batch endpoints can read data from to.
+Batch endpoints can be used to perform batch scoring on large amounts of data. Such data can be placed in different places. In this tutorial we'll cover the different places where batch endpoints can read data from and how to reference it.
## Prerequisites
Batch endpoints can be used to perform batch scoring on large amounts of data. S
## Supported data inputs
-Batch endpoints support reading files or folders that are located in different locations:
+Batch endpoints support reading files located in the following storage options:
* Azure Machine Learning Data Stores. The following stores are supported: * Azure Blob Storage
Batch endpoints support reading files or folders that are located in different l
## Reading data from data stores
-We're going to first upload some data to the default data store in the Azure Machine Learning workspace and then run a batch deployment on it. Follow these steps to run a batch endpoint job using data stored in a data store:
+Data from Azure Machine Learning registered data stores can be directly referenced by batch deployment jobs. In this example, we're going to first upload some data to the default data store in the Azure Machine Learning workspace and then run a batch deployment on it. Follow these steps to run a batch endpoint job using data stored in a data store:
1. Let's get access to the default data store in the Azure Machine Learning workspace. If your data is in a different store, you can use that store instead. There's no requirement of using the default data store.
We're going to first upload some data to the default data store in the Azure Mac
## Reading data from a data asset
-Follow these steps to run a batch endpoint job using data stored in a registered data asset in Azure Machine Learning:
+Azure Machine Learning data assets (formerly known as datasets) are supported as inputs for jobs. Follow these steps to run a batch endpoint job using data stored in a registered data asset in Azure Machine Learning:
> [!WARNING]
-> Data assets of type Table (`MLTable`) isn't currently supported.
+> Data assets of type Table (`MLTable`) aren't currently supported.
1. Let's create the data asset first. This data asset consists of a folder with multiple CSV files that we want to process in parallel using batch endpoints. You can skip this step if your data is already registered as a data asset.
Follow these steps to run a batch endpoint job using data stored in a registered
## Reading data from Azure Storage Accounts
-Azure Machine Learning batch endpoints can read data from cloud locations in Azure Storage Accounts. Both public and private cloud locations are supported. Use the following steps to run a batch endpoint job using data stored in a storage account:
+Azure Machine Learning batch endpoints can read data from cloud locations in Azure Storage Accounts, both public and private. Use the following steps to run a batch endpoint job using data stored in a storage account:
+
+> [!NOTE]
+> Check the section [Security considerations when reading data](#security-considerations-when-reading-data) to learn more about the additional configuration required to successfully read data from storage accounts.
1. Create a data input:
Batch endpoints ensure that only authorized users are able to invoke batch deplo
| Data store | Yes | Data store's credentials in the workspace | Credentials | | Data store | No | Identity of the job | Depends on type | | Data asset | Yes | Data store's credentials in the workspace | Credentials |
-| Data asset | No | Identity of the job + Managed identity of the compute cluster | Depends on store |
+| Data asset | No | Identity of the job | Depends on store |
| Azure Blob Storage | Not apply | Identity of the job + Managed identity of the compute cluster | RBAC | | Azure Data Lake Storage Gen1 | Not apply | Identity of the job + Managed identity of the compute cluster | POSIX | | Azure Data Lake Storage Gen2 | Not apply | Identity of the job + Managed identity of the compute cluster | POSIX and RBAC |
The managed identity of the compute cluster is used for mounting and configuring
> [!NOTE] > To assign an identity to the compute used by a batch deployment, follow the instructions at [Set up authentication between Azure ML and other services](../how-to-identity-based-service-authentication.md#compute-cluster). Configure the identity on the compute cluster associated with the deployment. Notice that all the jobs running on such compute are affected by this change. However, different deployments (even under the same deployment) can be configured to run under different clusters so you can administer the permissions accordingly depending on your requirements.+
+## Next steps
+
+* [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md).
+* [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md).
+* [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
machine-learning How To Authenticate Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-authenticate-batch-endpoint.md
# Authentication on batch endpoints
-Batch endpoints support Azure Active Directory authentication, or `aad_token`. That means that in order to invoke a batch endpoint, the user must present a valid Azure Active Directory authentication token to the batch endpoint URI. Authorization is enforced at the endpoint level. The following article explains how to correctly interact with batch endpoints and the security requirements for it.
+Batch endpoints support Azure Active Directory authentication, or `aad_token`. That means that in order to invoke a batch endpoint, the user must present a valid Azure Active Directory authentication token to the batch endpoint URI. Authorization is enforced at the endpoint level. The following article explains how to correctly interact with batch endpoints and the security requirements for it.
## Prerequisites
Batch endpoints support Azure Active Directory authentication, or `aad_token`. T
## How authentication works
-To invoke a batch endpoint, the user must present a valid Azure Active Directory token representing a security principal. This principal can be a __user principal__ or a __service principal__. In any case, once an endpoint is invoked, a batch deployment job is created under the identity associated with the token. The identity needs the following permissions in order to successfully create a job:
+To invoke a batch endpoint, the user must present a valid Azure Active Directory token representing a __security principal__. This principal can be a __user principal__ or a __service principal__. In any case, once an endpoint is invoked, a batch deployment job is created under the identity associated with the token. The identity needs the following permissions in order to successfully create a job:
> [!div class="checklist"] > * Read batch endpoints/deployments.
In this case, we want to execute a batch endpoint using the identity of the user
1. Once authenticated, use the following command to run a batch deployment job: ```azurecli
- az ml batch-endpoint invoke --name $ENDPOINT_NAME --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data
+ az ml batch-endpoint invoke --name $ENDPOINT_NAME --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci
``` # [Azure ML SDK for Python](#tab/sdk)
In this case, we want to execute a batch endpoint using the identity of the user
```python job = ml_client.batch_endpoints.invoke( endpoint_name,
- input=Input(path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data")
+ input=Input(path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci")
) ``` # [REST](#tab/rest)
-When working with REST APIs, we recommend to using either a service principal or a managed identity to interact with the API.
+When working with REST APIs, we recommend using either a [service principal](#running-jobs-using-a-service-principal) or a [managed identity](#running-jobs-using-a-managed-identity) to interact with the API.
In this case, we want to execute a batch endpoint using a service principal alre
# [Azure ML CLI](#tab/cli)
-1. Create a secret to use for authentication as explained at [Option 2: Create a new application secret](../../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
-1. For more details see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
+1. Create a secret to use for authentication as explained at [Option 2: Create a new application secret](../../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
+1. To authenticate using a service principal, use the following command. For more details see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
```bash az login --service-principal -u <app-id> -p <password-or-cert> --tenant <tenant>
In this case, we want to execute a batch endpoint using a service principal alre
1. Once authenticated, use the following command to run a batch deployment job: ```azurecli
- az ml batch-endpoint invoke --name $ENDPOINT_NAME --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data
+ az ml batch-endpoint invoke --name $ENDPOINT_NAME --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/
``` # [Azure ML SDK for Python](#tab/sdk)
In this case, we want to execute a batch endpoint using a service principal alre
```python job = ml_client.batch_endpoints.invoke( endpoint_name,
- input=Input(path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data")
+ input=Input(path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci")
) ```
You can use the REST API of Azure Machine Learning to start a batch endpoints jo
__Request__:
- ```Body
- POST /{TENANT_ID}/oauth2/token
- Host:https://login.microsoftonline.com
+ ```http
+ POST /{TENANT_ID}/oauth2/token HTTP/1.1
+ Host: login.microsoftonline.com
+ ```
+
+ __Body__:
+
+ ```
grant_type=client_credentials&client_id=<CLIENT_ID>&client_secret=<CLIENT_SECRET>&resource=https://ml.azure.com ```
You can use the REST API of Azure Machine Learning to start a batch endpoints jo
"InputData": { "mnistinput": { "JobInputType" : "UriFolder",
- "Uri": "https://pipelinedata.blob.core.windows.net/sampledata/mnist"
+ "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci"
} } }
You can use the REST API of Azure Machine Learning to start a batch endpoints jo
### Running jobs using a managed identity
+You can use managed identities to invoke batch endpoints and deployments. Notice that this managed identity doesn't belong to the batch endpoint; rather, it's the identity used to execute the endpoint and hence create the batch job. Both user-assigned and system-assigned identities can be used in this scenario.
+ # [Azure ML CLI](#tab/cli)
-On resources configured for managed identities for Azure resources, you can sign in using the managed identity. Signing in with the resource's identity is done through the `--identity` flag.
+On resources configured for managed identities for Azure resources, you can sign in using the managed identity. Signing in with the resource's identity is done through the `--identity` flag. For more details see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
```bash az login --identity ```
-For more details see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
+Once authenticated, use the following command to run a batch deployment job:
+
+```azurecli
+az ml batch-endpoint invoke --name $ENDPOINT_NAME --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci
+```
# [Azure ML SDK for Python](#tab/sdk)
Once authenticated, use the following command to run a batch deployment job:
```python job = ml_client.batch_endpoints.invoke( endpoint_name,
- input=Input(path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data")
+ input=Input(path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci")
) ```
machine-learning How To Mlflow Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-mlflow-batch.md
Use the following steps to deploy an MLflow model with a custom scoring script.
return results ```
-1. Let's create an environment where the scoring script can be executed:
+1. Let's create an environment where the scoring script can be executed. Since our model is MLflow, the conda requirements are also specified in the model package (for more details about MLflow models and the files included in them, see [The MLmodel format](../concept-mlflow-models.md#the-mlmodel-format)). We'll then build the environment using the conda dependencies from the file. However, __we also need to include__ the package `azureml-core`, which is required for Batch Deployments.
+
+ > [!TIP]
+ > If your model is already registered in the model registry, you can download/copy the `conda.yml` file associated with your model by going to [Azure ML studio](https://ml.azure.com) > Models > Select your model from the list > Artifacts. Open the root folder in the navigation and select the `conda.yml` file listed. Click on Download or copy its content.
+
+ > [!IMPORTANT]
+ > This example uses a conda environment specified at `/heart-classifier-mlflow/environment/conda.yaml`. This file was created by combining the original MLflow conda dependencies file and adding the package `azureml-core`. __You can't use the `conda.yml` file from the model directly__.
# [Azure ML CLI](#tab/cli)
Use the following steps to deploy an MLflow model with a custom scoring script.
image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest", ) ```-
+
1. Let's create the deployment now: # [Azure ML CLI](#tab/cli)
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-use-batch-endpoint.md
A deployment is a set of resources required for hosting the model that does the
1. On __Instance count__, enter the number of compute instances you want for the deployment. In this case, we will use 2. 1. Click on __Next__.
- :::image type="content" source="../media/how-to-use-batch-endpoints-studio/review-batch-wizard.png" alt-text="Screenshot of batch endpoints/deployment review screen.":::
-
- 1. Complete the wizard.
- 1. Create the deployment: # [Azure ML CLI](#tab/cli)
A deployment is a set of resources required for hosting the model that does the
In the wizard, click on __Create__ to start the deployment process.
- :::image type="content" source="../media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+ :::image type="content" source="../media/how-to-use-batch-endpoints-studio/review-batch-wizard.png" alt-text="Screenshot of batch endpoints/deployment review screen.":::
1. Check batch endpoint and deployment details.
The scoring results in Storage Explorer are similar to the following sample page
Once you have a batch endpoint with a deployment, you can continue to refine your model and add new deployments. Batch endpoints will continue serving the default deployment while you develop and deploy new models under the same endpoint. Deployments can't affect one to another.
+In this example, you will learn how to add a second deployment __that solves the same MNIST problem but using a model built with Keras and TensorFlow__.
+ ### Adding a second deployment
-1. Create an environment where your batch deployment will run. Include in the environment any dependency your code requires for running. You will also need to add the library `azureml-core` as it is required for batch deployments to work.
+1. Create an environment where your batch deployment will run. Include in the environment any dependency your code requires for running. You will also need to add the library `azureml-core` as it is required for batch deployments to work. The following environment definition has the required libraries to run a model with TensorFlow.
# [Azure ML CLI](#tab/cli)
Once you have a batch endpoint with a deployment, you can continue to refine you
1. Enter the name of the environment, in this case `keras-batch-env`. 1. On __Select environment type__ select __Use existing docker image with conda__. 1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04`.
- 1. On __Customize__ section copy the content of the file `./mnist/environment/conda.yml` included in the repository into the portal. The conda file looks as follows:
+ 1. On __Customize__ section copy the content of the file `./mnist-keras/environment/conda.yml` included in the repository into the portal. The conda file looks as follows:
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist/environment/conda.yml":::
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist-keras/environment/conda.yml":::
1. Click on __Next__ and then on __Create__. 1. The environment is ready to be used.
Once you have a batch endpoint with a deployment, you can continue to refine you
> [!IMPORTANT] > Do not forget to include the library `azureml-core` in your deployment as it is required by the executor. -
-1. Create a deployment definition
+1. Create a scoring script for the model:
+
+ __batch_driver.py__
+
+ :::code language="python" source="~/azureml-examples-main/sdk/python/endpoints/batch/mnist-keras/code/batch_driver.py" :::
+
+3. Create a deployment definition
# [Azure ML CLI](#tab/cli)
Once you have a batch endpoint with a deployment, you can continue to refine you
endpoint_name=batch_endpoint_name, model=model, code_path="./mnist-keras/code/",
- scoring_script="digit_identification.py",
+ scoring_script="batch_driver.py",
environment=env, compute=compute_name, instance_count=2,
Although you can invoke a specific deployment inside of an endpoint, you will us
# [Azure ML CLI](#tab/cli)
-```bash
-az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME
-```
# [Azure ML SDK for Python](#tab/sdk)
+```python
+endpoint = ml_client.batch_endpoints.get(endpoint_name)
+endpoint.defaults.deployment_name = deployment.name
+ml_client.batch_endpoints.begin_create_or_update(endpoint)
+```
# [studio](#tab/studio)
machine-learning How To Deploy Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoints.md
Previously updated : 10/06/2022 Last updated : 11/01/2022
The main example in this doc uses managed online endpoints for deployment. To us
* (Optional) To deploy locally, you must [install Docker Engine](https://docs.docker.com/engine/install/) on your local computer. We *highly recommend* this option, so it's easier to debug issues.
+# [ARM template](#tab/arm)
+
+> [!NOTE]
+> While the Azure CLI and CLI extension for machine learning are used in these steps, they are not the main focus. They are used more as utilities, passing templates to Azure and checking the status of template deployments.
++
+* Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure Machine Learning workspace, or a custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
+
+* If you haven't already set the defaults for the Azure CLI, save your default settings. To avoid passing in the values for your subscription, workspace, and resource group multiple times, run this code:
+
+ ```azurecli
+ az account set --subscription <subscription ID>
+ az configure --defaults workspace=<Azure Machine Learning workspace name> group=<resource group>
+ ```
+
+> [!IMPORTANT]
+> The examples in this document assume that you are using the Bash shell. For example, from a Linux system or [Windows Subsystem for Linux](/windows/wsl/about).
+ ## Prepare your system
The [workspace](concept-workspace.md) is the top-level resource for Azure Machin
) ```
+# [ARM template](#tab/arm)
+
+### Clone the sample repository
+
+To follow along with this article, first clone the [samples repository (azureml-examples)](https://github.com/azure/azureml-examples). Then, run the following code to go to the samples directory:
+
+```azurecli
+git clone --depth 1 https://github.com/Azure/azureml-examples
+cd azureml-examples
+```
+
+> [!TIP]
+> Use `--depth 1` to clone only the latest commit to the repository, which reduces time to complete the operation.
+
+### Set an endpoint name
+
+To set your endpoint name, run the following command (replace `YOUR_ENDPOINT_NAME` with a unique name).
+
+For Unix, run this command:
++
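+A minimal sketch of that command, using the placeholder name from the text above:
+
+```bash
+export ENDPOINT_NAME="<YOUR_ENDPOINT_NAME>"
+```
+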
+> [!NOTE]
+> Endpoint names must be unique within an Azure region. For example, in the Azure `westus2` region, there can be only one endpoint with the name `my-endpoint`.
+
+Also set the following environment variables, as they are used in the examples in this article. Replace the values with your Azure subscription ID, the Azure region where your workspace is located, the resource group that contains the workspace, and the workspace name:
+
+```bash
+export SUBSCRIPTION_ID="your Azure subscription ID"
+export LOCATION="Azure region where your workspace is located"
+export RESOURCE_GROUP="Azure resource group that contains your workspace"
+export WORKSPACE="Azure Machine Learning workspace name"
+```
+
+A couple of the template examples require you to upload files to the Azure Blob store for your workspace. The following steps will query the workspace and store this information in environment variables used in the examples:
+
+1. Get an access token:
+
+ :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="get_access_token":::
+
+1. Set the REST API version:
+
+ :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="api_version":::
+
+1. Get the storage information:
+
+ :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="get_storage_details":::
+ ## Define the endpoint and deployment
In this article, we first define names of online endpoint and deployment for deb
) ```
+# [ARM template](#tab/arm)
+
+The Azure Resource Manager templates [online-endpoint.json](https://github.com/Azure/azureml-examples/tree/main/arm-templates/online-endpoint.json) and [online-endpoint-deployment.json](https://github.com/Azure/azureml-examples/tree/main/arm-templates/online-endpoint-deployment.json) are used by the steps in this article.
+ ### Register your model and environment separately
For more information on registering your model as an asset, see [Register your m
For more information on creating an environment, see [Manage Azure Machine Learning environments with the CLI & SDK (v2)](how-to-manage-environments-v2.md#create-an-environment)
+# [ARM template](#tab/arm)
+
+1. To register the model using a template, you must first upload the model file to an Azure Blob store. The following example uses the `az storage blob upload-batch` command to upload a file to the default storage for your workspace:
+
+ :::code language="{language}" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="upload_model":::
+
+1. After uploading the file, use the template to create a model registration. In the following example, the `modelUri` parameter contains the path to the model:
+
+ :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="create_model":::
+
+1. Part of the environment is a conda file that specifies the model dependencies needed to host the model. The following example demonstrates how to read the contents of the conda file into an environment variable:
+
+ :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="read_condafile":::
+
+1. The following example demonstrates how to use the template to register the environment. The contents of the conda file from the previous step are passed to the template using the `condaFile` parameter:
+
+ :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="create_environment":::
+ ### Use different CPU and GPU instance types
As noted earlier, the script specified in `code_configuration.scoring_script` mu
# [Python](#tab/python) As noted earlier, the script specified in `CodeConfiguration(scoring_script="score.py")` must have an `init()` function and a `run()` function. This example uses the [score.py file](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/model-1/onlinescoring/score.py).
+# [ARM template](#tab/arm)
+
+As noted earlier, the script specified in `code_configuration.scoring_script` must have an `init()` function and a `run()` function. This example uses the [score.py file](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/model-1/onlinescoring/score.py).
+
+When using a template for deployment, you must first upload the scoring file(s) to an Azure Blob store and then register it:
+
+1. The following example uses the Azure CLI command `az storage blob upload-batch` to upload the scoring file(s):
+
+ :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="upload_code":::
+
+1. The following example demonstrates how to register the code using a template:
+
+ :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="create_code":::
+ The `init()` function is called when the container is initialized or started. Initialization typically occurs shortly after the deployment is created or updated. Write logic here for global initialization operations like caching the model in memory (as we do in this example). The `run()` function is called for every invocation of the endpoint and should do the actual scoring and prediction. In the example, we extract the data from the JSON input, call the scikit-learn model's `predict()` method, and then return the result.
First create an endpoint. Optionally, for a local endpoint, you can skip this st
ml_client.online_endpoints.begin_create_or_update(endpoint, local=True) ```
+# [ARM template](#tab/arm)
+
+The template doesn't support local endpoints. See the Azure CLI or Python tabs for steps to test the endpoint locally.
+ Now, create a deployment named `blue` under the endpoint.
ml_client.online_deployments.begin_create_or_update(
The `local=True` flag directs the SDK to deploy the endpoint in the Docker environment.
+# [ARM template](#tab/arm)
+
+The template doesn't support local endpoints. See the Azure CLI or Python tabs for steps to test the endpoint locally.
+ > [!TIP]
The method returns [`ManagedOnlineEndpoint` entity](/python/api/azure-ai-ml/azur
ManagedOnlineEndpoint({'public_network_access': None, 'provisioning_state': 'Succeeded', 'scoring_uri': 'http://localhost:49158/score', 'swagger_uri': None, 'name': 'local-10061534497697', 'description': 'this is a sample local endpoint', 'tags': {}, 'properties': {}, 'id': None, 'Resource__source_path': None, 'base_path': '/path/to/your/working/directory', 'creation_context': None, 'serialize': <msrest.serialization.Serializer object at 0x7ffb781bccd0>, 'auth_mode': 'key', 'location': 'local', 'identity': None, 'traffic': {}, 'mirror_traffic': {}, 'kind': None}) ```
+# [ARM template](#tab/arm)
+
+The template doesn't support local endpoints. See the Azure CLI or Python tabs for steps to test the endpoint locally.
+ The following table contains the possible values for `provisioning_state`:
endpoint = ml_client.online_endpoints.get(endpoint_name)
scoring_uri = endpoint.scoring_uri ```
+# [ARM template](#tab/arm)
+
+The template doesn't support local endpoints. See the Azure CLI or Python tabs for steps to test the endpoint locally.
+ ### Review the logs for output from the invoke operation
ml_client.online_deployments.get_logs(
) ```
+# [ARM template](#tab/arm)
+
+The template doesn't support local endpoints. See the Azure CLI or Python tabs for steps to test the endpoint locally.
+ ## Deploy your online endpoint to Azure
This deployment might take up to 15 minutes, depending on whether the underlying
ml_client.online_endpoints.begin_create_or_update(endpoint) ```
+# [ARM template](#tab/arm)
+
+1. The following example demonstrates using the template to create an online endpoint:
+
+ :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="create_endpoint":::
+
+1. After the endpoint has been created, the following example demonstrates how to deploy the model to the endpoint:
+
+ :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="create_deployment":::
+ > [!TIP]
for endpoint in ml_client.online_endpoints.list():
print(f"{endpoint.kind}\t{endpoint.location}\t{endpoint.name}") ```
+# [ARM template](#tab/arm)
+
+The `show` command returns information in `provisioning_state` for the endpoint and deployment:
++
+You can list all the endpoints in the workspace in a table format by using the `list` command:
+
+```azurecli
+az ml online-endpoint list --output table
+```
+ ### Check the status of the online deployment
ml_client.online_deployments.get_logs(
name="blue", endpoint_name=online_endpoint_name, lines=50, container_type="storage-initializer" ) ```+
+# [ARM template](#tab/arm)
++
+By default, logs are pulled from the inference server. To see the logs from the storage initializer (which mounts assets like the model and code to the container), add the `--container storage-initializer` flag.
+ For more information on deployment logs, see [Get container logs](how-to-troubleshoot-online-endpoints.md#get-container-logs).
ml_client.online_endpoints.invoke(
) ```
+# [ARM template](#tab/arm)
+
+You can use either the `invoke` command or a REST client of your choice to invoke the endpoint and score some data:
++
+The following example shows how to get the key used to authenticate to the endpoint:
+
+> [!TIP]
+> You can control which Azure Active Directory security principals can get the authentication key by assigning them to a custom role that allows `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/token/action` and `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/listkeys/action`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
++ ### (Optional) Update the deployment
To understand how `begin_create_or_update` works:
The `begin_create_or_update` method also works with local deployments. Use the same method with the `local=True` flag.
+# [ARM template](#tab/arm)
+
+Currently, there isn't an option to update the deployment by using an ARM template.
+ > [!Note]
If you aren't going use the deployment, you should delete it by running the foll
ml_client.online_endpoints.begin_delete(name=online_endpoint_name) ```
+# [ARM template](#tab/arm)
++ ## Next steps Try safe rollout of your models as a next step:-- [Safe rollout for online endpoints (CLI v2)](how-to-safely-rollout-managed-endpoints.md)-- [Safe rollout for online endpoints (SDK v2)](how-to-safely-rollout-managed-endpoints-sdk-v2.md)
+- [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)
To learn more, review these articles:
machine-learning How To Safely Rollout Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-safely-rollout-online-endpoints.md
+
+ Title: Safe rollout for online endpoints
+
+description: Roll out newer versions of ML models without disruption.
++++++ Last updated : 10/27/2022++++
+# Safe rollout for online endpoints
+++
+In this article, you'll learn how to deploy a new version of a machine learning model in production without causing any disruption. You'll use blue-green deployment, also known as a safe rollout strategy, to introduce a new version of a web service to production. This strategy will allow you to roll out your new version of the web service to a small subset of users or requests before rolling it out completely.
+
+This article assumes you're using online endpoints, that is, endpoints that are used for online (real-time) inferencing. There are two types of online endpoints: **managed online endpoints** and **Kubernetes online endpoints**. For more information on endpoints and the differences between managed online endpoints and Kubernetes online endpoints, see [What are Azure Machine Learning endpoints?](concept-endpoints.md#managed-online-endpoints-vs-kubernetes-online-endpoints).
+
+> [!Note]
+> The main example in this article uses managed online endpoints for deployment. To use Kubernetes endpoints instead, see the notes in this document inline with the managed online endpoints discussion.
+
+In this article, you'll learn to:
+
+> [!div class="checklist"]
+> * Define an online endpoint and a deployment called "blue" to serve version 1 of a model
+> * Scale the blue deployment so that it can handle more requests
+> * Deploy version 2 of the model (called the "green" deployment) to the endpoint, but send the deployment no live traffic
+> * Test the green deployment in isolation
+> * Mirror a percentage of live traffic to the green deployment to validate it (preview)
+> * Send a small percentage of live traffic to the green deployment
+> * Send over all live traffic to the green deployment
+> * Delete the now-unused v1 blue deployment
+
+## Prerequisites
+
+# [Azure CLI](#tab/azure-cli)
++
+* Azure role-based access control (Azure RBAC) is used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure Machine Learning workspace, or a custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
+
+* If you haven't already set the defaults for the Azure CLI, save your default settings. To avoid passing in the values for your subscription, workspace, and resource group multiple times, run this code:
+
+ ```azurecli
+ az account set --subscription <subscription id>
+ az configure --defaults workspace=<azureml workspace name> group=<resource group>
+ ```
+
+* (Optional) To deploy locally, you must [install Docker Engine](https://docs.docker.com/engine/install/) on your local computer. We *highly recommend* this option because it makes debugging issues easier.
+
+# [Python](#tab/python)
+++
+* Azure role-based access control (Azure RBAC) is used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure Machine Learning workspace, or a custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
+
+* (Optional) To deploy locally, you must [install Docker Engine](https://docs.docker.com/engine/install/) on your local computer. We *highly recommend* this option because it makes debugging issues easier.
+++
+## Prepare your system
+
+# [Azure CLI](#tab/azure-cli)
+
+### Clone the examples repository
+
+To follow along with this article, first clone the [examples repository (azureml-examples)](https://github.com/azure/azureml-examples). Then, go to the repository's `cli/` directory:
+
+```azurecli
+git clone --depth 1 https://github.com/Azure/azureml-examples
+cd azureml-examples
+cd cli
+```
+
+> [!TIP]
+> Use `--depth 1` to clone only the latest commit to the repository. This reduces the time to complete the operation.
+
+The commands in this tutorial are in the file `deploy-safe-rollout-online-endpoints.sh` in the `cli` directory, and the YAML configuration files are in the `endpoints/online/managed/sample/` subdirectory.
+
+> [!NOTE]
+> The YAML configuration files for Kubernetes online endpoints are in the `endpoints/online/kubernetes/` subdirectory.
+
+# [Python](#tab/python)
+
+### Clone the examples repository
+
+To run the training examples, first clone the [examples repository (azureml-examples)](https://github.com/azure/azureml-examples). Then, go into the `azureml-examples/sdk/python/endpoints/online/managed` directory:
+
+```bash
+git clone --depth 1 https://github.com/Azure/azureml-examples
+cd azureml-examples/sdk/python/endpoints/online/managed
+```
+
+> [!TIP]
+> Use `--depth 1` to clone only the latest commit to the repository. This reduces the time to complete the operation.
+
+The information in this article is based on the [online-endpoints-safe-rollout.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb) notebook. It contains the same content as this article, although the order of the code is slightly different.
+
+> [!NOTE]
+> The steps for the Kubernetes online endpoint are based on the [kubernetes-online-endpoints-safe-rollout.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/kubernetes/kubernetes-online-endpoints-safe-rollout.ipynb) notebook.
+
+### Connect to Azure Machine Learning workspace
+
+The [workspace](concept-workspace.md) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace where you'll perform deployment tasks.
+
+1. Import the required libraries:
+
+ [!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=import_libraries)]
+
+ > [!NOTE]
+ > If you're using the Kubernetes online endpoint, import the `KubernetesOnlineEndpoint` and `KubernetesOnlineDeployment` class from the `azure.ai.ml.entities` library.
+
+1. Configure workspace details and get a handle to the workspace:
+
+ To connect to a workspace, we need identifier parameters: a subscription, resource group, and workspace name. We'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. This example uses the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential).
+
+ [!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=workspace_details)]
+
+ [!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=workspace_handle)]
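+For illustration, here's a minimal sketch of what getting that handle typically looks like. The subscription, resource group, and workspace names are placeholders:
+
+```python
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+
+# Placeholder identifiers for the workspace you deploy to.
+subscription_id = "<SUBSCRIPTION_ID>"
+resource_group = "<RESOURCE_GROUP>"
+workspace_name = "<AML_WORKSPACE_NAME>"
+
+# Authenticate with the default credential chain and get a workspace handle.
+ml_client = MLClient(
+    DefaultAzureCredential(), subscription_id, resource_group, workspace_name
+)
+```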
+++
+## Define the endpoint and deployment
+
+Online endpoints are used for online (real-time) inferencing. Online endpoints contain deployments that are ready to receive data from clients and can send responses back in real time.
+
+# [Azure CLI](#tab/azure-cli)
+
+### Create online endpoint
+
+To create an online endpoint:
+
+1. Set your endpoint name:
+
+ For Unix, run this command (replace `YOUR_ENDPOINT_NAME` with a unique name):
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-safe-rollout-online-endpoints.sh" ID="set_endpoint_name":::
+
+ > [!IMPORTANT]
+ > Endpoint names must be unique within an Azure region. For example, in the Azure `westus2` region, there can be only one endpoint with the name `my-endpoint`.
+
+1. To create the endpoint in the cloud, run the following code:
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-safe-rollout-online-endpoints.sh" ID="create_endpoint":::
+
+### Create the 'blue' deployment
+
+A deployment is a set of resources required for hosting the model that does the actual inferencing. To create a deployment named `blue` for your endpoint, run the following command:
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-safe-rollout-online-endpoints.sh" ID="create_blue":::
+
+# [Python](#tab/python)
+
+### Create online endpoint
+
+To create a managed online endpoint, use the `ManagedOnlineEndpoint` class. This class allows users to configure the following key aspects of the endpoint:
+
+* `name` - Name of the endpoint. It needs to be unique at the Azure region level.
+* `auth_mode` - The authentication method for the endpoint. Key-based authentication and Azure ML token-based authentication are supported. Keys don't expire, but Azure ML tokens do. Possible values are `key` or `aml_token`.
+* `identity` - The managed identity configuration for accessing Azure resources for endpoint provisioning and inference.
+ * `type` - The type of managed identity. Azure Machine Learning supports `system_assigned` or `user_assigned` identity.
+ * `user_assigned_identities` - List (array) of fully qualified resource IDs of the user-assigned identities. This property is required if `identity.type` is `user_assigned`.
+* `description` - Description of the endpoint.
+
+1. Configure the endpoint:
+
+ [!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=configure_endpoint)]
+
+ > [!NOTE]
+ > To create a Kubernetes online endpoint, use the `KubernetesOnlineEndpoint` class.
+
+1. Create the endpoint:
+
+ [!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=create_endpoint)]
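+For illustration, here's a minimal sketch of an endpoint configured with the parameters described above. The endpoint name is a placeholder and must be unique in your Azure region:
+
+```python
+from azure.ai.ml.entities import ManagedOnlineEndpoint
+
+# Hypothetical endpoint name.
+online_endpoint_name = "my-safe-rollout-endpoint"
+
+endpoint = ManagedOnlineEndpoint(
+    name=online_endpoint_name,
+    description="Online endpoint for a blue-green rollout",
+    auth_mode="key",
+)
+
+# Create the endpoint and wait for the operation to complete.
+ml_client.online_endpoints.begin_create_or_update(endpoint).result()
+```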
+
+### Create the 'blue' deployment
+
+A deployment is a set of resources required for hosting the model that does the actual inferencing. To create a deployment for your managed online endpoint, use the `ManagedOnlineDeployment` class. This class allows users to configure the following key aspects of the deployment:
+
+**Key aspects of deployment**
+* `name` - Name of the deployment.
+* `endpoint_name` - Name of the endpoint to create the deployment under.
+* `model` - The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification.
+* `environment` - The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification.
+* `code_configuration` - The configuration for the source code and scoring script.
+ * `path` - Path to the source code directory for scoring the model.
+ * `scoring_script` - Relative path to the scoring file in the source code directory.
+* `instance_type` - The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md).
+* `instance_count` - The number of instances to use for the deployment.
+
+1. Configure blue deployment:
+
+ [!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=configure_deployment)]
+
+ > [!NOTE]
+ > To create a deployment for a Kubernetes online endpoint, use the `KubernetesOnlineDeployment` class.
+
+1. Create the deployment:
+
+ [!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=create_deployment)]
+
+ [!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=deployment_traffic)]
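+As a rough sketch, a `blue` deployment built from the parameters described above might look like the following. The model, environment image, and code paths are placeholders, not the exact values used in the notebook:
+
+```python
+from azure.ai.ml.entities import (
+    CodeConfiguration,
+    Environment,
+    ManagedOnlineDeployment,
+    Model,
+)
+
+blue_deployment = ManagedOnlineDeployment(
+    name="blue",
+    endpoint_name=online_endpoint_name,
+    # Inline model and environment specifications; the paths are hypothetical.
+    model=Model(path="../model-1/model/"),
+    environment=Environment(
+        conda_file="../model-1/environment/conda.yml",
+        image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
+    ),
+    code_configuration=CodeConfiguration(
+        code="../model-1/onlinescoring", scoring_script="score.py"
+    ),
+    instance_type="Standard_DS3_v2",
+    instance_count=1,
+)
+
+ml_client.online_deployments.begin_create_or_update(blue_deployment).result()
+
+# Route all live traffic to the blue deployment.
+endpoint.traffic = {"blue": 100}
+ml_client.online_endpoints.begin_create_or_update(endpoint).result()
+```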
+++
+## Confirm your existing deployment
+
+# [Azure CLI](#tab/azure-cli)
+
+You can view the status of your existing endpoint and deployment by running:
+
+```azurecli
+az ml online-endpoint show --name $ENDPOINT_NAME
+
+az ml online-deployment show --name blue --endpoint $ENDPOINT_NAME
+```
+
+You should see the endpoint identified by `$ENDPOINT_NAME` and a deployment called `blue`.
+
+### Test the endpoint with sample data
+
+The endpoint can be invoked using the `invoke` command. We'll send a sample request using a [json](https://github.com/Azure/azureml-examples/tree/main/sdk/python/endpoints/online/model-1/sample-request.json) file.
++
+# [Python](#tab/python)
+
+Check the status to see whether the model was deployed without error:
+
+```python
+ml_client.online_endpoints.get(name=online_endpoint_name)
+```
+
+### Test the endpoint with sample data
+
+Using the `MLClient` created earlier, we'll get a handle to the endpoint. The endpoint can be invoked using the `invoke` command with the following parameters:
+
+* `endpoint_name` - Name of the endpoint
+* `request_file` - File with request data
+* `deployment_name` - Name of the specific deployment to test in an endpoint
+
+We'll send a sample request using a [json](https://github.com/Azure/azureml-examples/tree/main/sdk/python/endpoints/online/model-1/sample-request.json) file.
+
+[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=test_deployment)]
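+In code, such an invocation might look like the following sketch. The request file path is a placeholder:
+
+```python
+# Score a sample payload against the blue deployment specifically.
+response = ml_client.online_endpoints.invoke(
+    endpoint_name=online_endpoint_name,
+    deployment_name="blue",
+    request_file="../model-1/sample-request.json",  # hypothetical path
+)
+print(response)
+```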
+++
+## Scale your existing deployment to handle more traffic
+
+# [Azure CLI](#tab/azure-cli)
+
+In the deployment described in [Deploy and score a machine learning model with an online endpoint](how-to-deploy-managed-online-endpoints.md), you set the `instance_count` to the value `1` in the deployment yaml file. You can scale out using the `update` command:
++
+> [!Note]
+> Notice that in the preceding command, we use `--set` to override the deployment configuration. Alternatively, you can update the YAML file and pass it as an input to the `update` command by using the `--file` parameter.
+
+# [Python](#tab/python)
+
+Using the `MLClient` created earlier, we'll get a handle to the deployment. The deployment can be scaled by increasing or decreasing the `instance_count`.
+
+[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=scale_deployment)]
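+Here's a sketch of that scale-out, assuming you want to go from one instance to two:
+
+```python
+# Fetch the existing deployment, increase the instance count, and apply the update.
+blue_deployment = ml_client.online_deployments.get(
+    name="blue", endpoint_name=online_endpoint_name
+)
+blue_deployment.instance_count = 2
+ml_client.online_deployments.begin_create_or_update(blue_deployment).result()
+```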
+
+### Get endpoint details
+
+[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=get_endpoint_details)]
+++
+## Deploy a new model, but send it no traffic yet
+
+# [Azure CLI](#tab/azure-cli)
+
+Create a new deployment named `green`:
++
+Since we haven't explicitly allocated any traffic to `green`, it will have zero traffic allocated to it. You can verify that using the command:
++
+### Test the new deployment
+
+Though `green` has 0% of traffic allocated, you can invoke it directly by specifying the `--deployment` name:
++
+If you want to use a REST client to invoke the deployment directly without going through traffic rules, set the following HTTP header: `azureml-model-deployment: <deployment-name>`. The following code snippet uses `curl` to invoke the deployment directly. The code snippet should work in Unix/WSL environments:
++
+# [Python](#tab/python)
+
+Create a new deployment for your managed online endpoint and name the deployment `green`:
+
+[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=configure_new_deployment)]
+
+[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=create_new_deployment)]
+
+> [!NOTE]
+> If you're creating a deployment for a Kubernetes online endpoint, use the `KubernetesOnlineDeployment` class and specify a [Kubernetes instance type](how-to-manage-kubernetes-instance-types.md) in your Kubernetes cluster.
+
+### Test the new deployment
+
+Though `green` has 0% of traffic allocated, you can still invoke the endpoint and deployment with the [json](https://github.com/Azure/azureml-examples/tree/main/sdk/python/endpoints/online/model-2/sample-request.json) file.
+
+[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=test_new_deployment)]
+++
+## Test the deployment with mirrored traffic (preview)
+
+Once you've tested your `green` deployment, you can copy (or 'mirror') a percentage of the live traffic to it. Mirroring traffic doesn't change the results returned to clients; requests still flow 100% to the `blue` deployment. The mirrored percentage of the traffic is copied and submitted to the `green` deployment so that you can gather metrics and logging without impacting your clients. Mirroring is useful when you want to validate a new deployment without affecting clients; for example, you can check whether latency is within acceptable bounds and whether there are any HTTP errors.
+
+> [!WARNING]
+> Mirroring traffic uses your [endpoint bandwidth quota](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) (default 5 MBps). Your endpoint bandwidth will be throttled if you exceed the allocated quota. For information on monitoring bandwidth throttling, see [Monitor managed online endpoints](how-to-monitor-online-endpoints.md#metrics-at-endpoint-scope).
+
+# [Azure CLI](#tab/azure-cli)
+
+The following command mirrors 10% of the traffic to the `green` deployment:
++
+You can test mirror traffic by invoking the endpoint several times:
+
+```azurecli
+for i in {1..20} ; do
+ az ml online-endpoint invoke --name $ENDPOINT_NAME --request-file endpoints/online/model-1/sample-request.json
+done
+```
+
+# [Python](#tab/python)
+
+The following command mirrors 10% of the traffic to the `green` deployment:
+
+[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=new_deployment_traffic)]
+
+You can test mirror traffic by invoking the endpoint several times:
+[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=several_tests_to_mirror_traffic)]
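+Putting the two steps together, a minimal sketch of mirroring 10% of traffic and then exercising the endpoint might look like this (the request file path is a placeholder):
+
+```python
+# Copy 10% of live traffic to the green deployment; client responses are unaffected.
+endpoint.mirror_traffic = {"green": 10}
+ml_client.online_endpoints.begin_create_or_update(endpoint).result()
+
+# Invoke the endpoint several times so some requests are mirrored to green.
+for _ in range(20):
+    ml_client.online_endpoints.invoke(
+        endpoint_name=online_endpoint_name,
+        request_file="../model-1/sample-request.json",  # hypothetical path
+    )
+```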
+++
+Mirroring has the following limitations:
+* You can only mirror traffic to one deployment.
+* Mirror traffic isn't currently supported for Kubernetes online endpoints.
+* The maximum mirrored traffic you can configure is 50%. This limit is to reduce the impact on your endpoint bandwidth quota.
+
+Also note the following behavior:
+* A deployment can only be set to live or mirror traffic, not both.
+* You can send traffic directly to the mirror deployment by specifying the deployment set for mirror traffic.
+* You can send traffic directly to a live deployment by specifying the deployment set for live traffic, but in this case the traffic won't be mirrored to the mirror deployment. Mirror traffic is routed only from traffic that's sent to the endpoint without specifying a deployment.
++
+# [Azure CLI](#tab/azure-cli)
+You can confirm that the specified percentage of the traffic was sent to the `green` deployment by checking the logs from the deployment:
+
+```azurecli
+az ml online-deployment get-logs --name blue --endpoint $ENDPOINT_NAME
+```
+
+After testing, you can set the mirror traffic to zero to disable mirroring:
++
+# [Python](#tab/python)
+You can confirm that the specified percentage of the traffic was sent to the `green` deployment by checking the logs from the deployment:
+
+```python
+ml_client.online_deployments.get_logs(
+ name="green", endpoint_name=online_endpoint_name, lines=50
+)
+```
+
+After testing, you can set the mirror traffic to zero to disable mirroring:
+
+[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=disable_traffic_mirroring)]
+++
+## Test the new deployment with a small percentage of live traffic
+
+# [Azure CLI](#tab/azure-cli)
+
+Once you've tested your `green` deployment, allocate a small percentage of traffic to it:
++
+# [Python](#tab/python)
+
+Once you've tested your `green` deployment, allocate a small percentage of traffic to it:
+
+[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=allocate_some_traffic)]
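+For reference, here's a sketch of a 90/10 split between the two deployments:
+
+```python
+# Send 90% of live traffic to blue and 10% to green.
+endpoint.traffic = {"blue": 90, "green": 10}
+ml_client.online_endpoints.begin_create_or_update(endpoint).result()
+```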
+++
+Now, your `green` deployment will receive 10% of requests.
++
+## Send all traffic to your new deployment
+
+# [Azure CLI](#tab/azure-cli)
+
+Once you're fully satisfied with your `green` deployment, switch all traffic to it.
++
+# [Python](#tab/python)
+
+Once you're fully satisfied with your `green` deployment, switch all traffic to it.
+
+[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=allocate_all_traffic)]
+++
+## Remove the old deployment
+
+# [Azure CLI](#tab/azure-cli)
++
+# [Python](#tab/python)
+
+[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=remove_old_deployment)]
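+Here's a sketch of removing the unused `blue` deployment once all traffic is on `green`:
+
+```python
+# Delete the blue deployment now that it no longer receives traffic.
+ml_client.online_deployments.begin_delete(
+    name="blue", endpoint_name=online_endpoint_name
+).result()
+```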
+++
+## Delete the endpoint and deployment
+
+# [Azure CLI](#tab/azure-cli)
+
+If you aren't going to use the deployment, you should delete it with:
++
+# [Python](#tab/python)
+
+If you aren't going to use the deployment, you should delete it with:
+
+[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=delete_endpoint)]
+++
+## Next steps
+- [Explore online endpoint samples](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/endpoints)
+- [Deploy models with REST](how-to-deploy-with-rest.md)
+- [Create and use online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md)
+- [Access Azure resources with an online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md)
+- [Monitor managed online endpoints](how-to-monitor-online-endpoints.md)
+- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)
+- [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)
+- [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md)
+- [Troubleshooting online endpoints deployment and scoring](how-to-troubleshoot-managed-online-endpoints.md)
+- [Online endpoint YAML reference](reference-yaml-endpoint-online.md)
migrate Migrate Support Matrix Vmware Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware-migration.md
Title: Support for VMware migration in Azure Migrate
-description: Learn about support for VMware VM migration in Azure Migrate.
+ Title: Support for VMware vSphere migration in Azure Migrate
+description: Learn about support for VMware vSphere VM migration in Azure Migrate.
ms. Previously updated : 06/08/2020 Last updated : 10/04/2022
-# Support matrix for VMware migration
+# Support matrix for VMware vSphere migration
-This article summarizes support settings and limitations for migrating VMware VMs with [Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) . If you're looking for information about assessing VMware VMs for migration to Azure, review the [assessment support matrix](migrate-support-matrix-vmware.md).
+This article summarizes support settings and limitations for migrating VMware vSphere VMs with [Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool). If you're looking for information about assessing VMware vSphere VMs for migration to Azure, review the [assessment support matrix](migrate-support-matrix-vmware.md).
## Migration options
-You can migrate VMware VMs in a couple of ways:
+You can migrate VMware vSphere VMs in a couple of ways:
- **Using agentless migration**: Migrate VMs without needing to install anything on them. You deploy the [Azure Migrate appliance](migrate-appliance.md) for agentless migration. - **Using agent-based migration**: Install an agent on the VM for replication. For agent-based migration, you deploy a [replication appliance](migrate-replication-appliance.md).
Review [this article](server-migrate-overview.md) to figure out which method you
## Agentless migration
-This section summarizes requirements for agentless VMware VM migration to Azure.
+This section summarizes requirements for agentless VMware vSphere VM migration to Azure.
-### VMware requirements (agentless)
+### VMware vSphere requirements (agentless)
-The table summarizes VMware hypervisor requirements.
+The table summarizes VMware vSphere hypervisor requirements.
**VMware** | **Details** | **VMware vCenter Server** | Version 5.5, 6.0, 6.5, 6.7, 7.0.
-**VMware vSphere ESXI host** | Version 5.5, 6.0, 6.5, 6.7, 7.0.
+**VMware vSphere ESXi host** | Version 5.5, 6.0, 6.5, 6.7, 7.0.
**vCenter Server permissions** | Agentless migration uses the [Migrate Appliance](migrate-appliance.md). The appliance needs these permissions in vCenter Server:<br/><br/> - **Datastore.Browse** (Datastore -> Browse datastore): Allow browsing of VM log files to troubleshoot snapshot creation and deletion.<br/><br/> - **Datastore.FileManagement** (Datastore -> Low level file operations): Allow read/write/delete/rename operations in the datastore browser, to troubleshoot snapshot creation and deletion.<br/><br/> - **VirtualMachine.Config.ChangeTracking** (Virtual machine -> Disk change tracking): Allow enable or disable change tracking of VM disks, to pull changed blocks of data between snapshots.<br/><br/> - **VirtualMachine.Config.DiskLease** (Virtual machine -> Disk lease): Allow disk lease operations for a VM, to read the disk using the VMware vSphere Virtual Disk Development Kit (VDDK).<br/><br/> - **VirtualMachine.Provisioning.DiskRandomRead** (Virtual machine -> Provisioning -> Allow read-only disk access): Allow opening a disk on a VM, to read the disk using the VDDK.<br/><br/> - **VirtualMachine.Provisioning.DiskRandomAccess** (Virtual machine -> Provisioning -> Allow disk access): Allow opening a disk on a VM, to read the disk using the VDDK.<br/><br/> - **VirtualMachine.Provisioning.GetVmFiles** (Virtual machine -> Provisioning -> Allow virtual machine download): Allows read operations on files associated with a VM, to download the logs and troubleshoot if failure occurs.<br/><br/> - **VirtualMachine.State.\*** (Virtual machine -> Snapshot management): Allow creation and management of VM snapshots for replication.<br/><br/> - **VirtualMachine.Interact.PowerOff** (Virtual machine -> Interaction -> Power off): Allow the VM to be powered off during migration to Azure. **Multiple vCenter Servers** | A single appliance can connect to up to 10 vCenter Servers. ### VM requirements (agentless)
-The table summarizes agentless migration requirements for VMware VMs.
+The table summarizes agentless migration requirements for VMware vSphere VMs.
**Support** | **Details** |
The table summarizes agentless migration requirements for VMware VMs.
### Appliance requirements (agentless)
-Agentless migration uses the [Azure Migrate appliance](migrate-appliance.md). You can deploy the appliance as a VMware VM using an OVA template, imported into vCenter Server, or using a [PowerShell script](deploy-appliance-script.md).
+Agentless migration uses the [Azure Migrate appliance](migrate-appliance.md). You can deploy the appliance as a VMware vSphere VM using an OVA template, imported into vCenter Server, or using a [PowerShell script](deploy-appliance-script.md).
-- Learn about [appliance requirements](migrate-appliance.md#appliancevmware) for VMware.
+- Learn about [appliance requirements](migrate-appliance.md#appliancevmware) for VMware vSphere.
- Learn about URLs that the appliance needs to access in [public](migrate-appliance.md#public-cloud-urls) and [government](migrate-appliance.md#government-cloud-urls) clouds. - In Azure Government, you must deploy the appliance [using the script](deploy-appliance-script-government.md).
Agentless migration uses the [Azure Migrate appliance](migrate-appliance.md). Yo
**Device** | **Connection** | Appliance | Outbound connections on port 443 to upload replicated data to Azure, and to communicate with Azure Migrate services orchestrating replication and migration.
-vCenter server | Inbound connections on port 443 to allow the appliance to orchestrate replication - create snapshots, copy data, release snapshots.
-vSphere/ESXI host | Inbound on TCP port 902 for the appliance to replicate data from snapshots. Outbound port 902 from ESXi host.
+vCenter Server | Inbound connections on port 443 to allow the appliance to orchestrate replication - create snapshots, copy data, release snapshots.
+vSphere ESXi host | Inbound on TCP port 902 for the appliance to replicate data from snapshots. Outbound port 902 from ESXi host.
## Agent-based migration
vSphere/ESXI host | Inbound on TCP port 902 for the appliance to replicate data
This section summarizes requirements for agent-based migration.
-### VMware requirements (agent-based)
+### VMware vSphere requirements (agent-based)
-This table summarizes assessment support and limitations for VMware virtualization servers.
+This table summarizes assessment support and limitations for VMware vSphere virtualization servers.
-**VMware requirements** | **Details**
+**VMware vSphere requirements** | **Details**
| **VMware vCenter Server** | Version 5.5, 6.0, 6.5, or 6.7.
-**VMware vSphere ESXI host** | Version 5.5, 6.0, 6.5, 6.7 or 7.0.
+**VMware vSphere ESXi host** | Version 5.5, 6.0, 6.5, 6.7 or 7.0.
**vCenter Server permissions** | A read-only account for vCenter Server. ### VM requirements (agent-based)
-The table summarizes VMware VM support for VMware VMs you want to migrate using agent-based migration.
+The table summarizes VMware vSphere VM support for VMware vSphere VMs you want to migrate using agent-based migration.
**Support** | **Details** |
The table summarizes VMware VM support for VMware VMs you want to migrate using
When you set up the replication appliance using the OVA template provided in the Azure Migrate hub, the appliance runs Windows Server 2016 and complies with the support requirements. If you set up the replication appliance manually on a physical server, then make sure that it complies with the requirements. -- Learn about [replication appliance requirements](migrate-replication-appliance.md#appliance-requirements) for VMware.
+- Learn about [replication appliance requirements](migrate-replication-appliance.md#appliance-requirements) for VMware vSphere.
- MySQL must be installed on the appliance. Learn about [installation options](migrate-replication-appliance.md#mysql-installation). - Learn about URLs that the replication appliance needs to access in [public](migrate-replication-appliance.md#url-access) and [government](migrate-replication-appliance.md#azure-government-url-access) clouds. - Review the [ports](migrate-replication-appliance.md#port-access) the replication appliance needs to access.
Connect after migration-Linux | To connect to Azure VMs after migration using SS
## Next steps
-[Select](server-migrate-overview.md) a VMware migration option.
+[Select](server-migrate-overview.md) a VMware vSphere migration option.
migrate Prepare For Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-for-migration.md
ms. Previously updated : 06/08/2020 Last updated : 10/04/2022 # Prepare on-premises machines for migration to Azure
This article describes how to prepare on-premises machines before you migrate th
In this article, you: > [!div class="checklist"] > * Review migration limitations.
-> * Select a method for migrating VMware VMs
+> * Select a method for migrating VMware vSphere VMs
> * Check hypervisor and operating system requirements for machines you want to migrate. > * Review URL and port access for machines you want to migrate. > * Review changes you might need to make before you begin migration.
The table summarizes discovery, assessment, and migration limits for Azure Migra
**Scenario** | **Project** | **Discovery/Assessment** | **Migration** | | |
-**VMware VMs** | Discover and assess up to 35,000 VMs in a single Azure Migrate project. | Discover up to 10,000 VMware VMs with a single [Azure Migrate appliance](common-questions-appliance.md) for VMware. <br> The appliance supports adding multiple vCenter Servers. You can add up to 10 vCenter Servers per appliance. | **Agentless migration**: you can simultaneously replicate a maximum of 500 VMs across multiple vCenter Servers (discovered from one appliance) using a scale-out appliance.<br> **Agent-based migration**: you can [scale out](./agent-based-migration-architecture.md#performance-and-scaling) the [replication appliance](migrate-replication-appliance.md) to replicate large numbers of VMs.<br/><br/> In the portal, you can select up to 10 machines at once for replication. To replicate more machines, add in batches of 10.
+**VMware vSphere VMs** | Discover and assess up to 35,000 VMs in a single Azure Migrate project. | Discover up to 10,000 VMware vSphere VMs with a single [Azure Migrate appliance](common-questions-appliance.md) for VMware vSphere. <br> The appliance supports adding multiple vCenter Servers. You can add up to 10 vCenter Servers per appliance. | **Agentless migration**: you can simultaneously replicate a maximum of 500 VMs across multiple vCenter Servers (discovered from one appliance) using a scale-out appliance.<br> **Agent-based migration**: you can [scale out](./agent-based-migration-architecture.md#performance-and-scaling) the [replication appliance](migrate-replication-appliance.md) to replicate large numbers of VMs.<br/><br/> In the portal, you can select up to 10 machines at once for replication. To replicate more machines, add in batches of 10.
**Hyper-V VMs** | Discover and assess up to 35,000 VMs in a single Azure Migrate project. | Discover up to 5,000 Hyper-V VMs with a single Azure Migrate appliance | An appliance isn't used for Hyper-V migration. Instead, the Hyper-V Replication Provider runs on each Hyper-V host.<br/><br/> Replication capacity is influenced by performance factors such as VM churn, and upload bandwidth for replication data.<br/><br/> In the portal, you can select up to 10 machines at once for replication. To replicate more machines, add in batches of 10. **Physical machines** | Discover and assess up to 35,000 machines in a single Azure Migrate project. | Discover up to 250 physical servers with a single Azure Migrate appliance for physical servers. | You can [scale out](./agent-based-migration-architecture.md#performance-and-scaling) the [replication appliance](migrate-replication-appliance.md) to replicate large numbers of servers.<br/><br/> In the portal, you can select up to 10 machines at once for replication. To replicate more machines, add in batches of 10.
-## Select a VMware migration method
+## Select a VMware vSphere migration method
-If you're migrating VMware VMs to Azure, [compare](server-migrate-overview.md#compare-migration-methods) the agentless and agent-based migration methods, to decide what works for you.
+If you're migrating VMware vSphere VMs to Azure, [compare](server-migrate-overview.md#compare-migration-methods) the agentless and agent-based migration methods, to decide what works for you.
## Verify hypervisor requirements -- Verify [VMware agentless](migrate-support-matrix-vmware-migration.md#vmware-requirements-agentless), or [VMware agent-based](migrate-support-matrix-vmware-migration.md#vmware-requirements-agent-based) requirements.
+- Verify [VMware vSphere agentless](migrate-support-matrix-vmware-migration.md#vmware-vsphere-requirements-agentless) or [VMware vSphere agent-based](migrate-support-matrix-vmware-migration.md#vmware-vsphere-requirements-agent-based) requirements.
- Verify [Hyper-V host](migrate-support-matrix-hyper-v-migration.md#hyper-v-host-requirements) requirements.
If you're migrating VMware VMs to Azure, [compare](server-migrate-overview.md#co
Verify supported operating systems for migration: -- If you're migrating VMware VMs or Hyper-V VMs, verify VMware VM requirements for [agentless](migrate-support-matrix-vmware-migration.md#vm-requirements-agentless), and [agent-based](migrate-support-matrix-vmware-migration.md#vm-requirements-agent-based) migration, and requirements for [Hyper-V VMs](migrate-support-matrix-hyper-v-migration.md#hyper-v-vms).
+- If you're migrating VMware vSphere VMs or Hyper-V VMs, verify VMware vSphere VM requirements for [agentless](migrate-support-matrix-vmware-migration.md#vm-requirements-agentless), and [agent-based](migrate-support-matrix-vmware-migration.md#vm-requirements-agent-based) migration, and requirements for [Hyper-V VMs](migrate-support-matrix-hyper-v-migration.md#hyper-v-vms).
- Verify [Windows operating systems](https://support.microsoft.com/help/2721672/microsoft-server-software-support-for-microsoft-azure-virtual-machines) are supported in Azure. - Verify [Linux distributions](../virtual-machines/linux/endorsed-distros.md) supported in Azure.
Review which URLs and ports are accessed during migration.
**Scenario** | **Details** | **URLs** | **Ports** | | |
-**VMware agentless migration** | Uses the [Azure Migrate appliance](migrate-appliance-architecture.md) for migration. Nothing is installed on VMware VMs. | Review the public cloud and government [URLs](migrate-appliance.md#url-access) needed for discovery, assessment, and migration with the appliance. | [Review](migrate-support-matrix-vmware-migration.md#port-requirements-agentless) the port requirements for agentless migration.
-**VMware agent-based migration** | Uses the [replication appliance](migrate-replication-appliance.md) for migration. The Mobility service agent is installed on VMs. | Review the [public cloud](migrate-replication-appliance.md#url-access) and [Azure Government](migrate-replication-appliance.md#azure-government-url-access) URLs that the replication appliance needs to access. | [Review](migrate-replication-appliance.md#port-access) the ports used during agent-based migration.
+**VMware vSphere agentless migration** | Uses the [Azure Migrate appliance](migrate-appliance-architecture.md) for migration. Nothing is installed on VMware vSphere VMs. | Review the public cloud and government [URLs](migrate-appliance.md#url-access) needed for discovery, assessment, and migration with the appliance. | [Review](migrate-support-matrix-vmware-migration.md#port-requirements-agentless) the port requirements for agentless migration.
+**VMware vSphere agent-based migration** | Uses the [replication appliance](migrate-replication-appliance.md) for migration. The Mobility service agent is installed on VMs. | Review the [public cloud](migrate-replication-appliance.md#url-access) and [Azure Government](migrate-replication-appliance.md#azure-government-url-access) URLs that the replication appliance needs to access. | [Review](migrate-replication-appliance.md#port-access) the ports used during agent-based migration.
**Hyper-V migration** | Uses a Provider installed on Hyper-V hosts for migration. Nothing is installed on Hyper-V VMs. | Review the [public cloud](migrate-support-matrix-hyper-v-migration.md#url-access-public-cloud) and [Azure Government](migrate-support-matrix-hyper-v-migration.md#url-access-azure-government) URLs that the Replication Provider running on the hosts needs to access. | The Replication Provider on the Hyper-V host uses outbound connections on HTTPS port 443 to send VM replication data. **Physical machines** | Uses the [replication appliance](migrate-replication-appliance.md) for migration. The Mobility service agent is installed on the physical machines. | Review the [public cloud](migrate-replication-appliance.md#url-access) and [Azure Government](migrate-replication-appliance.md#azure-government-url-access) URLs that the replication appliance needs to access. | [Review](migrate-replication-appliance.md#port-access) the ports used during physical migration.
Review the tables to identify the changes you need to make.
Changes performed are summarized in the table.
-**Action** | **VMware (agentless migration)** | **VMware (agent-based)/physical machines** | **Windows on Hyper-V**
+**Action** | **VMware vSphere (agentless migration)** | **VMware vSphere (agent-based)/physical machines** | **Windows on Hyper-V**
| | | **Configure the SAN policy as Online All**<br/><br/> | Set automatically for machines running Windows Server 2008 R2 or later.<br/><br/> Configure manually for earlier operating systems. | Set automatically in most cases. | Set automatically for machines running Windows Server 2008 R2 or later. **Install Hyper-V Guest Integration** | [Install manually](prepare-windows-server-2003-migration.md#install-on-vmware-vms) on machines running Windows Server 2003. | [Install manually](prepare-windows-server-2003-migration.md#install-on-vmware-vms) on machines running Windows Server 2003. | [Install manually](prepare-windows-server-2003-migration.md#install-on-hyper-v-vms) on machines running Windows Server 2003.
Changes performed are summarized in the table.
**Install the Windows Azure Guest Agent** <br/><br/> The Virtual Machine Agent (VM Agent) is a secure, lightweight process that manages virtual machine (VM) interaction with the Azure Fabric Controller. The VM Agent has a primary role in enabling and executing Azure virtual machine extensions that enable post-deployment configuration of VM, such as installing and configuring software. | Set automatically for machines running Windows Server 2008 R2 or later. <br/> Configure manually for earlier operating systems. | Set automatically for machines running Windows Server 2008 R2 or later. | Set automatically for machines running Windows Server 2008 R2 or later. **Connect after migration**<br/><br/> To connect after migration, there are a number of steps to take before you migrate. | [Set up](#prepare-to-connect-to-azure-windows-vms) manually. | [Set up](#prepare-to-connect-to-azure-windows-vms) manually. | [Set up](#prepare-to-connect-to-azure-windows-vms) manually.
-[Learn more](./prepare-for-agentless-migration.md#changes-performed-on-windows-servers) on the changes performed on Windows servers for agentless VMware migrations.
+[Learn more](./prepare-for-agentless-migration.md#changes-performed-on-windows-servers) on the changes performed on Windows servers for agentless VMware vSphere migrations.
#### Configure SAN policy
-By default, Azure VMs are assigned drive D to use as temporary storage.
+By default, Azure VMs are assigned drive D: to use as temporary storage.
- This drive assignment causes all other attached storage drive assignments to increment by one letter.-- For example, if your on-premises installation uses a data disk that is assigned to drive D for application installations, the assignment for this drive increments to drive E after you migrate the VM to Azure.
+- For example, if your on-premises installation uses a data disk that is assigned to drive D: for application installations, the assignment for this drive increments to drive E: after you migrate the VM to Azure.
- To prevent this automatic assignment, and to ensure that Azure assigns the next free drive letter to its temporary volume, set the storage area network (SAN) policy to **OnlineAll**: Configure this setting manually as follows:
For other versions, prepare machines as summarized in the table.
**Enable ssh** | Ensure ssh is enabled and the sshd service is set to start automatically on reboot.<br/><br/> Ensure that incoming ssh connection requests are not blocked by the OS firewall or scriptable rules.| Enable manually for all versions except those called out above. **Install the Linux Azure Guest Agent** | The Microsoft Azure Linux Agent (waagent) is a secure, lightweight process that manages Linux & FreeBSD provisioning, and VM interaction with the Azure Fabric Controller.| Enable manually for all versions except those called out above. <br> Follow instructions to [install the Linux Agent manually](../virtual-machines/extensions/agent-linux.md#installation) for other OS versions. Review the list of [required packages](../virtual-machines/extensions/agent-linux.md#requirements) to install Linux VM agent.
-[Learn more](./prepare-for-agentless-migration.md#changes-performed-on-linux-servers) on the changes performed on Linux servers for agentless VMware migrations.
+[Learn more](./prepare-for-agentless-migration.md#changes-performed-on-linux-servers) on the changes performed on Linux servers for agentless VMware vSphere migrations.
The following table summarizes the steps performed automatically for the operating systems listed above.
-| Action | Agent\-Based VMware Migration | Agentless VMware Migration | Agentless Hyper\-V Migration |
+| Action | Agent\-Based VMware vSphere Migration | Agentless VMware vSphere Migration | Agentless Hyper\-V Migration |
||-|-|| | Update kernel image with Hyper\-V Linux Integration Services. <br> (The LIS drivers should be present on the kernel.) | Yes | Yes | Yes | | Enable Azure Serial Console logging | Yes | Yes | Yes |
After migration, complete these steps on the Azure VMs that are created:
## Next steps
-Decide which method you want to use to [migrate VMware VMs](server-migrate-overview.md) to Azure, or begin migrating [Hyper-V VMs](tutorial-migrate-hyper-v.md) or [physical servers or virtualized or cloud VMs](tutorial-migrate-physical-virtual-machines.md).
+Decide which method you want to use to [migrate VMware vSphere VMs](server-migrate-overview.md) to Azure, or begin migrating [Hyper-V VMs](tutorial-migrate-hyper-v.md) or [physical servers or virtualized or cloud VMs](tutorial-migrate-physical-virtual-machines.md).
## See what's supported
-For VMware VMs, Server Migration supports [agentless or agent-based migration](server-migrate-overview.md).
+For VMware vSphere VMs, Server Migration supports [agentless or agent-based migration](server-migrate-overview.md).
-- **VMware VMs**: Verify [migration requirements and support](migrate-support-matrix-vmware-migration.md) for VMware VMs.
+- **VMware vSphere VMs**: Verify [migration requirements and support](migrate-support-matrix-vmware-migration.md) for VMware vSphere VMs.
- **Hyper-V VMs**: Verify [migration requirements and support](migrate-support-matrix-hyper-v-migration.md) for Hyper-V VMs. - **Physical machines**: Verify [migration requirements and support](migrate-support-matrix-physical-migration.md) for on-premises physical machines and other virtualized servers. ## Learn more -- [Prepare for VMware agentless migration with Azure Migrate.](./prepare-for-agentless-migration.md)
+- [Prepare for VMware vSphere agentless migration with Azure Migrate.](./prepare-for-agentless-migration.md)
migrate Set Discovery Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/set-discovery-scope.md
Title: Set the scope for discovery of servers on VMware with Azure Migrate
-description: Describes how to set the discovery scope for servers hosted on VMware assessment and migration with Azure Migrate.
+ Title: Set the scope for discovery of servers on VMware vSphere with Azure Migrate
+description: Describes how to set the discovery scope for servers hosted on VMware vSphere assessment and migration with Azure Migrate.
ms. Previously updated : 03/13/2021 Last updated : 10/04/2022
-# Set discovery scope for servers in VMware environment
+# Set discovery scope for servers in VMware vSphere environment
-This article describes how to limit the scope of discovery for servers in VMware environment when you are:
+This article describes how to limit the scope of discovery for servers in a VMware vSphere environment when you are:
- Discovering servers with the [Azure Migrate appliance](migrate-appliance-architecture.md) when you're using the Azure Migrate: Discovery and assessment tool.-- Discovering servers with the [Azure Migrate appliance](migrate-appliance-architecture.md) when you're using the Azure Migrate:Server Migration tool, for agentless migration of servers from VMware environment to Azure.
+- Discovering servers with the [Azure Migrate appliance](migrate-appliance-architecture.md) when you're using the Azure Migrate: Server Migration tool, for agentless migration of servers from a VMware vSphere environment to Azure.
When you set up the appliance, it connects to vCenter Server and starts discovery. Before you connect the appliance to vCenter Server, you can limit discovery to vCenter Server datacenters, clusters, a folder of clusters, hosts, a folder of hosts, or individual servers. To set the scope, you assign permissions on the account that the appliance uses to access the vCenter Server. ## Before you start
-If you haven't set up a vCenter user account that Azure Migrate uses for discovery, do that now for [assessment](./tutorial-discover-vmware.md#prepare-vmware) or [agentless migration](./migrate-support-matrix-vmware-migration.md#agentless-migration).
+If you haven't set up a vCenter Server user account that Azure Migrate uses for discovery, do that now for [assessment](./tutorial-discover-vmware.md#prepare-vmware) or [agentless migration](./migrate-support-matrix-vmware-migration.md#agentless-migration).
## Assign permissions and roles
-You can assign permissions on VMware inventory objects using one of two methods:
+You can assign permissions on VMware vSphere inventory objects using one of two methods:
- On the account used by the appliance, assign a role with the required permissions on the objects you want to scope.-- Alternatively, assign a role to the account at the datacenter level, and propagate to the child objects. Then give the account a **No access** role, for every object that you don't want in scope. We don't recommend this approach since it's cumbersome, and might expose access controls, because every new child object is automatically granted access inherited from the parent.
+- Alternatively, assign a role to the account at the data center level, and propagate it to the child objects. Then give the account a **No access** role for every object that you don't want in scope. We don't recommend this approach since it's cumbersome and might expose access controls, because every new child object is automatically granted access inherited from the parent.
-You can't scope inventory discovery at the vCenter server folder level. If you need to scope discover to servers in a folder, create a user and grant access individually to each required server. Host and cluster folders are supported.
+You can't scope inventory discovery at the vCenter Server folder level. If you need to scope discovery to servers in a folder, create a user and grant access individually to each required server. Host and cluster folders are supported.
### Assign a role for assessment
-1. On the appliance vCenter account you're using for discovery, apply the **Read-only** role for all parent objects that host servers you want to discover and assess (host, cluster, hosts folder, clusters folder, up to datacenter).
+1. On the appliance vCenter Server account you're using for discovery, apply the **Read-only** role for all parent objects that host servers you want to discover and assess (host, cluster, hosts folder, clusters folder, up to datacenter).
2. Propagate these permissions to child objects in the hierarchy. ![Assign permissions](./media/tutorial-assess-vmware/assign-perms.png) ### Assign a role for agentless migration
-1. On the appliance vCenter account you're using for migration, apply a user-defined role that has the [permissions needed](migrate-support-matrix-vmware-migration.md#vmware-requirements-agentless), to all parent objects that host servers you want to discover and migrate.
+1. On the appliance vCenter Server account you're using for migration, apply a user-defined role that has the [permissions needed](migrate-support-matrix-vmware-migration.md#vmware-vsphere-requirements-agentless), to all parent objects that host servers you want to discover and migrate.
+2. You can name the role with something that's easier to identify. For example, <em>Azure_Migrate</em>. ## Workaround for server folder restriction
-Currently, the Azure Migrate: Discovery and assessment tool can't discover servers if access is granted at the vCenter server folder level. If you do want to scope your discovery and assessment by server folders, use this workaround.
+Currently, the Azure Migrate: Discovery and assessment tool can't discover servers if access is granted at the vCenter Server folder level. If you do want to scope your discovery and assessment by server folders, use this workaround.
1. Assign read-only permissions on all servers located in the folders you want to scope for discovery and assessment.
-2. Grant read-only access to all the parent objects that host the servers host, cluster, hosts folder, clusters folder, up to datacenter). You don't need to propagate the permissions to all child objects.
+2. Grant read-only access to all the parent objects that host the servers (host, cluster, hosts folder, clusters folder, up to data center). You don't need to propagate the permissions to all child objects.
3. To use the credentials for discovery, select the datacenter as **Collection Scope**.
The role-based access control setup ensures that the corresponding vCenter user
## Next steps
-[Set up the appliance](how-to-set-up-appliance-vmware.md)
+[Set up the appliance](how-to-set-up-appliance-vmware.md)
migrate Tutorial Migrate Vmware Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-agent.md
Title: Migrate VMware VMs with agent-based Azure Migrate Server Migration
-description: Learn how to run an agent-based migration of VMware VMs with Azure Migrate.
+ Title: Migrate VMware vSphere VMs with agent-based Azure Migrate Server Migration
+description: Learn how to run an agent-based migration of VMware vSphere VMs with Azure Migrate.
ms. Previously updated : 06/20/2022 Last updated : 10/04/2022
-# Migrate VMware VMs to Azure (agent-based)
+# Migrate VMware vSphere VMs to Azure (agent-based)
-This article shows you how to migrate on-premises VMware VMs to Azure, using the [Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) tool, with agent-based migration. You can also migrate VMware VMs using agentless migration. [Compare](server-migrate-overview.md#compare-migration-methods) the methods.
+This article shows you how to migrate on-premises VMware vSphere VMs to Azure, using the [Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) tool, with agent-based migration. You can also migrate VMware vSphere VMs using agentless migration. [Compare](server-migrate-overview.md#compare-migration-methods) the methods.
In this tutorial, you learn how to: > [!div class="checklist"] > * Prepare Azure to work with Azure Migrate.
-> * Prepare for agent-based migration. Set up a VMware account so that Azure Migrate can discover machines for migration. Set up an account so that the Mobility service agent can install on machines you want to migrate, and prepare a machine to act as the replication appliance.
+> * Prepare for agent-based migration. Set up a VMware vCenter Server account so that Azure Migrate can discover machines for migration. Set up an account so that the Mobility service agent can install on machines you want to migrate, and prepare a machine to act as the replication appliance.
> * Add the Azure Migrate: Server Migration tool > * Set up the replication appliance. > * Replicate VMs.
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites
-Before you begin this tutorial, [review](./agent-based-migration-architecture.md) the VMware agent-based migration architecture.
+Before you begin this tutorial, [review](./agent-based-migration-architecture.md) the VMware vSphere agent-based migration architecture.
## Prepare Azure
Verify support requirements and permissions, and prepare to deploy a replicatio
### Prepare an account to discover VMs
-Azure Migrate: Server Migration needs access to VMware servers to discover VMs you want to migrate. Create the account as follows:
+Azure Migrate: Server Migration needs access to VMware vSphere to discover VMs you want to migrate. Create the account as follows:
-1. To use a dedicated account, create a role at the vCenter level. Give the role a name such as
+1. To use a dedicated account, create a role at the vCenter Server level. Give the role a name such as
**Azure_Migrate**. 2. Assign the role the permissions summarized in the table below.
-3. Create a user on the vCenter server or vSphere host. Assign the role to the user.
+3. Create a user on the vCenter Server or vSphere host. Assign the role to the user.
-#### VMware account permissions
+#### VMware vSphere account permissions
**Task** | **Role/Permissions** | **Details** | | **VM discovery** | At least a read-only user<br/><br/> Data Center object –> Propagate to Child Object, role=Read-only | User assigned at datacenter level, and has access to all the objects in the datacenter.<br/><br/> To restrict access, assign the **No access** role with the **Propagate to child** object, to the child objects (vSphere hosts, datastores, VMs, and networks).
-**Replication** | Create a role (Azure Site Recovery) with the required permissions, and then assign the role to a VMware user or group<br/><br/> Data Center object ΓÇô> Propagate to Child Object, role=Azure Site Recovery<br/><br/> Datastore -> Allocate space, browse datastore, low-level file operations, remove file, update virtual machine files<br/><br/> Network -> Network assign<br/><br/> Resource -> Assign VM to resource pool, migrate powered off VM, migrate powered on VM<br/><br/> Tasks -> Create task, update task<br/><br/> Virtual machine -> Configuration<br/><br/> Virtual machine -> Interact -> answer question, device connection, configure CD media, configure floppy media, power off, power on, VMware tools install<br/><br/> Virtual machine -> Inventory -> Create, register, unregister<br/><br/> Virtual machine -> Provisioning -> Allow virtual machine download, allow virtual machine files upload<br/><br/> Virtual machine -> Snapshots -> Remove snapshots | User assigned at datacenter level, and has access to all the objects in the datacenter.<br/><br/> To restrict access, assign the **No access** role with the **Propagate to child** object, to the child objects (vSphere hosts, datastores, VMs, and networks).
+**Replication** | Create a role (Azure Site Recovery) with the required permissions, and then assign the role to a VMware vSphere user or group<br/><br/> Data Center object –> Propagate to Child Object, role=Azure Site Recovery<br/><br/> Datastore -> Allocate space, browse datastore, low-level file operations, remove file, update virtual machine files<br/><br/> Network -> Network assign<br/><br/> Resource -> Assign VM to resource pool, migrate powered off VM, migrate powered on VM<br/><br/> Tasks -> Create task, update task<br/><br/> Virtual machine -> Configuration<br/><br/> Virtual machine -> Interact -> answer question, device connection, configure CD media, configure floppy media, power off, power on, VMware tools install<br/><br/> Virtual machine -> Inventory -> Create, register, unregister<br/><br/> Virtual machine -> Provisioning -> Allow virtual machine download, allow virtual machine files upload<br/><br/> Virtual machine -> Snapshots -> Remove snapshots | User assigned at datacenter level, and has access to all the objects in the datacenter.<br/><br/> To restrict access, assign the **No access** role with the **Propagate to child** object, to the child objects (vSphere hosts, datastores, VMs, and networks).
### Prepare an account for Mobility service installation
Prepare the account as follows:
### Prepare a machine for the replication appliance
-The appliance is used to replication machines to Azure. The appliance is single, highly available, on-premises VMware VM that hosts these components:
+The appliance is used to replicate machines to Azure. The appliance is a single, highly available, on-premises VMware vSphere VM that hosts these components:
- **Configuration server**: The configuration server coordinates communications between on-premises and Azure, and manages data replication. - **Process server**: The process server acts as a replication gateway. It receives replication data; optimizes it with caching, compression, and encryption, and sends it to a cache storage account in Azure. The process server also installs the Mobility Service agent on VMs you want to replicate, and performs automatic discovery of on-premises VMware VMs. Prepare for the appliance as follows: -- [Review appliance requirements](migrate-replication-appliance.md#appliance-requirements). Generally, you set up the replication appliance a VMware VM using a downloaded OVA file. The template creates an appliance that complies with all requirements.
+- [Review appliance requirements](migrate-replication-appliance.md#appliance-requirements). Generally, you set up the replication appliance as a VMware vSphere VM using a downloaded OVA file. The template creates an appliance that complies with all requirements.
- MySQL must be installed on the appliance. [Review](migrate-replication-appliance.md#mysql-installation) installation methods. - Review the [public cloud URLs](migrate-replication-appliance.md#url-access), and [Azure Government URLs](migrate-replication-appliance.md#azure-government-url-access) that the appliance machine needs to access. - [Review the ports](migrate-replication-appliance.md#port-access) that the replication appliance machine needs to access.
-### Check VMware requirements
+### Check VMware vSphere requirements
-Make sure VMware servers and VMs comply with requirements for migration to Azure.
+Make sure VMware vSphere VMs comply with requirements for migration to Azure.
-1. [Verify](migrate-support-matrix-vmware-migration.md#vmware-requirements-agent-based) VMware server requirements.
+1. [Verify](migrate-support-matrix-vmware-migration.md#vmware-vsphere-requirements-agent-based) VMware vSphere VM requirements.
2. [Verify](migrate-support-matrix-vmware-migration.md#vm-requirements-agent-based) VM requirements for migration. 3. Verify Azure settings. On-premises VMs you replicate to Azure must comply with [Azure VM requirements](migrate-support-matrix-vmware-migration.md#azure-vm-requirements). 4. There are some changes needed on VMs before you migrate them to Azure.
Download the template as follows:
10. Note the name of the resource group and the Recovery Services vault. You need these during appliance deployment.
-### Import the template in VMware
+### Import the template into VMware vSphere
-After downloading the OVF template, you import it into VMware to create the replication application on a VMware VM running Windows Server 2016.
+After downloading the OVF template, you import it into VMware vSphere to create the replication appliance on a VMware vSphere VM running Windows Server 2016.
-1. Sign in to the VMware vCenter server or vSphere ESXi host with the VMware vSphere Client.
+1. Sign in to the VMware vCenter Server or vSphere ESXi host with the VMware vSphere Client.
2. On the **File** menu, select **Deploy OVF Template** to start the **Deploy OVF Template Wizard**. 3. In **Select source**, enter the location of the downloaded OVF. 4. In **Review details**, select **Next**.
After you've verified that the test migration works as expected, you can migrate
## Next steps
-Investigate the [cloud migration journey](/azure/architecture/cloud-adoption/getting-started/migrate) in the Azure Cloud Adoption Framework.
+Investigate the [cloud migration journey](/azure/architecture/cloud-adoption/getting-started/migrate) in the Azure Cloud Adoption Framework.
network-watcher Enable Network Watcher Flow Log Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/enable-network-watcher-flow-log-settings.md
Title: Enable Azure Network Watcher | Microsoft Docs
description: Learn how to enable Network Watcher. documentationcenter: na--+ na Last updated 05/11/2022-+ # Enable Azure Network Watcher
network-watcher Supported Region Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/supported-region-traffic-analytics.md
Title: Azure Traffic Analytics supported regions | Microsoft Docs
description: This article provides the list of Traffic Analytics supported regions. documentationcenter: na--+ na Last updated 05/11/2022-+ ms.custon: references_regions
network-watcher Usage Scenarios Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/usage-scenarios-traffic-analytics.md
Title: Usage scenarios of Azure Traffic Analytics | Microsoft Docs
description: This article describes the usage scenarios of Traffic Analytics. documentationcenter: na--+ na Last updated 05/11/2022-+ # Usage scenarios
private-5g-core Azure Private 5G Core Release Notes 2208 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-private-5g-core-release-notes-2208.md
description: Discover what's new in the Azure Private 5G Core 2208 release
-+ Last updated 09/23/2022
private-5g-core Azure Private 5G Core Release Notes 2209 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-private-5g-core-release-notes-2209.md
description: Discover what's new in the Azure Private 5G Core 2209 release
-+ Last updated 09/30/2022
private-5g-core Azure Private 5G Core Release Notes 2210 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-private-5g-core-release-notes-2210.md
+
+ Title: Azure Private 5G Core 2210 release notes
+description: Discover what's new in the Azure Private 5G Core 2210 release
++++ Last updated : 11/01/2022++
+# Azure Private 5G Core 2210 release notes
+
+The following release notes identify the new features, critical open issues, and resolved issues for the 2210 release of Azure Private 5G Core (AP5GC). The release notes are continuously updated, and critical issues requiring a workaround are added here as they're discovered. Before deploying this new version, carefully review the information contained in these release notes.
+
+This article applies to the AP5GC 2210 release (PMN-4-18-0). This release is compatible with the ASE Pro GPU running the ASE 2209 release and is supported by the 2022-04-01-preview [Microsoft.MobileNetwork API version](/rest/api/mobilenetwork).
+
+## Issues fixed in the AP5GC 2210 release
+
+The following table provides a summary of issues fixed in this release.
+
+ |No. |Feature | Issue |
+ |--|--|--|
+ | 1 | 4G/5G Signaling | Azure Private 5G Core will incorrectly accept SCTP connections on the wrong N2 IP address. This issue has been fixed in this release. |
+ | 2 | 4G/5G Signaling | In rare scenarios, due to a race condition triggered during a RAN disconnect/re-connect sequence, Azure Private 5G Core may fail to process incoming requests from the eNodeB or gNodeB. This issue has been fixed in this release. |
+ | 3 | 4G/5G Signaling | In rare scenarios, Azure Private 5G Core may corrupt the internal state of a packet data session, resulting in subsequent changes to that packet data session failing. This issue has been fixed in this release. |
+ | 4 | Packet forwarding | Azure Private 5G Core drops N3 data packets received from a gNodeB if they have specific flags set in the GTP-U packet header, resulting in the traffic from the user equipment (UE) never reaching the server on the N6 side. Specifically, the *Sequence Number* or *N-PDU* GTP-U header flags being set cause this issue. This issue has been fixed in this release. |
+ | 5 | Policy | In a specific scenario if the ASE 2209 release is reinstalled, the SIM and policy records from the first installation are retained on the ASE. This issue has been fixed in this release. |
+ | 6 | 4G/5G Signaling | In scenarios when the establishment of a PDU session has failed, Azure Private 5G Core may not automatically release the session, and the UE may need to re-register. This issue has been fixed in this release. |
+
+## Known issues from previous releases
+
+The following table provides a summary of known issues carried over from the previous releases.
+
+ |No. |Feature | Issue | Workaround/comments |
+ |--|--|--|--|
+ | 1 | Policy configuration | Azure Private 5G Core may ignore non-default quality of service (QoS) and policy configuration when handling 4G subscribers. | Not applicable. |
+ | 2 | Packet forwarding | Azure Private 5G Core may not forward buffered packets if NAT is enabled. | Not applicable. |
+ | 3 | 4G/5G Signaling | Azure Private 5G Core may perform an unnecessary PDU session resource setup transaction following a UE initiated service request. | Not applicable. |
+ | 4 | 4G/5G Signaling | In rare scenarios when a significant number of UEs are bulk registered and send continuous data, the core may incorrectly release data sessions. | If sessions are released, UEs may need to re-connect with the system to use data services. |
+ | 5 | Local dashboards | Azure Private 5G Core local dashboards may show incorrect values in some graphs (for example, session counts) after a power cycle of the Azure Stack Edge server. | Not applicable. |
+ | 6 | Local dashboards | The distributed tracing web GUI fails to display and decode some fields of 4G/5G NAS messages. Specifically, the *Request Type* and *DNN* information elements. | Messages will have to be viewed from separate packet capture if needed. |
+ | 7 | Performance | It has been observed very rarely that CPU allocation on an Azure Private 5G Packet Core deployment can result in some signaling processing workloads sharing a logical CPU core with data plane processing workloads, resulting in session creation failures or packet processing latency/failures at a moderate load. | Redeploying the Azure Private 5G Packet Core may resolve the problematic CPU allocation. |
+
+## Next steps
+
+- [Upgrade the packet core instance in a site - Azure portal](upgrade-packet-core-azure-portal.md)
+- [Upgrade the packet core instance in a site - ARM template](upgrade-packet-core-arm-template.md)
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Container Registry (Microsoft.ContainerRegistry/registries) / registry | privatelink.azurecr.io </br> {region}.privatelink.azurecr.io | azurecr.io </br> {region}.azurecr.io | | Azure App Configuration (Microsoft.AppConfiguration/configurationStores) / configurationStores | privatelink.azconfig.io | azconfig.io | | Azure Backup (Microsoft.RecoveryServices/vaults) / AzureBackup | privatelink.{region}.backup.windowsazure.com | {region}.backup.windowsazure.com |
-| Azure Site Recovery (Microsoft.RecoveryServices/vaults) / AzureSiteRecovery | privatelink.{region}.siterecovery.windowsazure.com | {region}.siterecovery.windowsazure.com |
+| Azure Site Recovery (Microsoft.RecoveryServices/vaults) / AzureSiteRecovery | privatelink.siterecovery.windowsazure.com | {region}.siterecovery.windowsazure.com |
| Azure Event Hubs (Microsoft.EventHub/namespaces) / namespace | privatelink.servicebus.windows.net | servicebus.windows.net | | Azure Service Bus (Microsoft.ServiceBus/namespaces) / namespace | privatelink.servicebus.windows.net | servicebus.windows.net | | Azure IoT Hub (Microsoft.Devices/IotHubs) / iotHub | privatelink.azure-devices.net<br/>privatelink.servicebus.windows.net<sup>1</sup> | azure-devices.net<br/>servicebus.windows.net |
For Azure services, use the recommended zone names as described in the following
| Azure Search (Microsoft.Search/searchServices) / searchService | privatelink.search.windows.us | search.windows.us | | Azure App Configuration (Microsoft.AppConfiguration/configurationStores) / configurationStores | privatelink.azconfig.azure.us | azconfig.azure.us | | Azure Backup (Microsoft.RecoveryServices/vaults) / AzureBackup | privatelink.{region}.backup.windowsazure.us | {region}.backup.windowsazure.us |
-| Azure Site Recovery (Microsoft.RecoveryServices/vaults) / AzureSiteRecovery | privatelink.{region}.siterecovery.windowsazure.us | {region}.siterecovery.windowsazure.us |
+| Azure Site Recovery (Microsoft.RecoveryServices/vaults) / AzureSiteRecovery | privatelink.siterecovery.windowsazure.us | {region}.siterecovery.windowsazure.us |
| Azure Event Hubs (Microsoft.EventHub/namespaces) / namespace | privatelink.servicebus.usgovcloudapi.net | servicebus.usgovcloudapi.net| | Azure Service Bus (Microsoft.ServiceBus/namespaces) / namespace | privatelink.servicebus.usgovcloudapi.net| servicebus.usgovcloudapi.net | | Azure IoT Hub (Microsoft.Devices/IotHubs) / iotHub | privatelink.azure-devices.us<br/>privatelink.servicebus.windows.us<sup>1</sup> | azure-devices.us<br/>servicebus.usgovcloudapi.net |
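For reference, a private DNS zone with one of the recommended names from the tables above can be created through the Azure REST API before it's linked to your virtual network. The following is a minimal sketch, not a complete private endpoint DNS configuration; the subscription ID and resource group are placeholders, and the Site Recovery zone name is only an example.

```http
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/privateDnsZones/privatelink.siterecovery.windowsazure.com?api-version=2018-09-01
Authorization: Bearer {access-token}
Content-Type: application/json

{
  "location": "global"
}
```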
purview Create A Scan Rule Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-a-scan-rule-set.md
Previously updated : 09/27/2021 Last updated : 11/01/2022 # Create a scan rule set
purview How To Policies Data Owner Arc Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-arc-sql-server.md
Previously updated : 10/12/2022 Last updated : 10/31/2022 # Provision access by data owner for SQL Server on Azure Arc-enabled servers (preview)
[Data owner policies](concept-policies-data-owner.md) are a type of Microsoft Purview access policies. They allow you to manage access to user data in sources that have been registered for *Data Use Management* in Microsoft Purview. These policies can be authored directly in the Microsoft Purview governance portal, and after publishing, they get enforced by the data source.
-This guide covers how a data owner can delegate authoring policies in Microsoft Purview to enable access to SQL Server on Azure Arc-enabled servers. The following actions are currently enabled: *SQL Performance Monitoring*, *SQL Security Auditing* and *Read*. These 3 actions are only supported for policies at server level. *Modify* is not supported at this point.
+This guide covers how a data owner can delegate authoring policies in Microsoft Purview to enable access to SQL Server on Azure Arc-enabled servers. The following actions are currently enabled: *Read*. This action is only supported for policies at server level. *Modify* is not supported at this point.
## Prerequisites [!INCLUDE [Access policies generic pre-requisites](./includes/access-policies-prerequisites-generic.md)]
Once your data source has the **Data Use Management** toggle *Enabled*, it will
## Create and publish a data owner policy
-Execute the steps in the **Create a new policy** and **Publish a policy** sections of the [data-owner policy authoring tutorial](./how-to-policies-data-owner-authoring-generic.md#create-a-new-policy). The result will be a data owner policy similar to one of the examples shown in the images.
+Execute the steps in the **Create a new policy** and **Publish a policy** sections of the [data-owner policy authoring tutorial](./how-to-policies-data-owner-authoring-generic.md#create-a-new-policy). The result will be a data owner policy similar to the example:
-**Example #1: SQL Performance Monitor policy**. This policy assigns the Azure AD principal 'Christie Cline' to the *SQL Performance monitoring* action, in the scope of Arc-enabled SQL server *DESKTOP-xxx*. This policy has also been published to that server. Note: Policies related to this action are not supported below server level.
-
-![Screenshot shows a sample data owner policy giving SQL Performance Monitor access to an Azure SQL Database.](./media/how-to-policies-data-owner-sql/data-owner-policy-example-arc-sql-server-performance-monitor.png)
-
-**Example #2: SQL Security Auditor policy**. Similar to example 1, but choose the *SQL Security auditing* action (instead of *SQL Performance monitoring*), when authoring the policy. Note: Policies related to this action are not supported below server level.
-
-**Example #3: Read policy**. This policy assigns the Azure AD principal 'sg-Finance' to the *SQL Data reader* action, in the scope of SQL server *DESKTOP-xxx*. This policy has also been published to that server. Note: Policies related to this action are not supported below server level.
+**Example: Read policy**. This policy assigns the Azure AD principal 'sg-Finance' to the *SQL Data reader* action, in the scope of SQL server *DESKTOP-xxx*. This policy has also been published to that server. Note that policies related to this action are not supported below server level.
![Screenshot shows a sample data owner policy giving Data Reader access to an Azure SQL Database.](./media/how-to-policies-data-owner-sql/data-owner-policy-example-arc-sql-server-data-reader.png)
This section contains a reference of how actions in Microsoft Purview data polic
||Microsoft.Sql/Sqlservers/Databases/Schemas/Tables/Rows| ||Microsoft.Sql/Sqlservers/Databases/Schemas/Views/Rows | |||
-| *SQL Performance Monitor* |Microsoft.Sql/sqlservers/Connect |
-||Microsoft.Sql/sqlservers/databases/Connect |
-||Microsoft.Sql/sqlservers/databases/SystemViewsAndFunctions/DatabasePerformanceState/rows/select |
-||Microsoft.Sql/sqlservers/databases/SystemViewsAndFunctions/ServerPerformanceState/rows/select |
-|||
-| *SQL Security Auditor* |Microsoft.Sql/sqlservers/Connect |
-||Microsoft.Sql/sqlservers/databases/Connect |
-||Microsoft.Sql/sqlservers/SystemViewsAndFunctions/ServerSecurityState/rows/select |
-||Microsoft.Sql/sqlservers/databases/SystemViewsAndFunctions/DatabaseSecurityState/rows/select |
-||Microsoft.Sql/sqlservers/SystemViewsAndFunctions/ServerSecurityMetadata/rows/select |
-||Microsoft.Sql/sqlservers/databases/SystemViewsAndFunctions/DatabaseSecurityMetadata/rows/select |
-|||
---- ## Next steps
purview How To Policies Data Owner Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-authoring-generic.md
The steps to publish a policy are as follows:
>[!Note] > After making changes to a policy, there is no need to publish it again for it to take effect if the data source(s) continues to be the same.
+## Unpublish a policy
+Ensure you have the *Data Source Admin* permission as described [here](how-to-enable-data-use-management.md#configure-microsoft-purview-permissions-needed-to-publish-data-owner-policies).
+
+The steps to unpublish a policy are as follows:
+
+1. Sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
+
+1. Navigate to the **Data policy** feature using the left side panel. Then select **Data policies**.
+
+ :::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/policy-onboard-guide-2.png" alt-text="Screenshot showing data owner can access the Policy functionality in Microsoft Purview when it wants to update a policy by selecting Data policies.":::
+
+1. The Policy portal will present the list of existing policies in Microsoft Purview. Locate the policy that needs to be unpublished. Select the trash can icon.
+
+![Screenshot shows how to unpublish a data owner policy.](./media/how-to-policies-data-owner-authoring-generic/unpublish-policy.png)
+ ## Update or delete a policy Steps to update or delete a policy in Microsoft Purview are as follows.
purview How To Policies Data Owner Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-azure-sql-db.md
Previously updated : 10/03/2022 Last updated : 10/31/2022 # Provision access by data owner for Azure SQL Database (preview)
[Data owner policies](concept-policies-data-owner.md) are a type of Microsoft Purview access policies. They allow you to manage access to user data in sources that have been registered for *Data Use Management* in Microsoft Purview. These policies can be authored directly in the Microsoft Purview governance portal, and after publishing, they get enforced by the data source.
-This guide covers how a data owner can delegate authoring policies in Microsoft Purview to enable access to Azure SQL Database. The following actions are currently enabled: *SQL Performance Monitoring*, *SQL Security Auditing* and *Read*. The first two actions are supported only at server level. *Modify* is not supported at this point.
+This guide covers how a data owner can delegate authoring policies in Microsoft Purview to enable access to Azure SQL Database. The following actions are currently enabled: *Read*. *Modify* is not supported at this point.
## Prerequisites [!INCLUDE [Access policies generic pre-requisites](./includes/access-policies-prerequisites-generic.md)]
Once your data source has the **Data Use Management** toggle *Enabled*, it will
## Create and publish a data owner policy
-Execute the steps in the **Create a new policy** and **Publish a policy** sections of the [data-owner policy authoring tutorial](./how-to-policies-data-owner-authoring-generic.md#create-a-new-policy). The result will be a data owner policy similar to one of the examples shown in the images.
+Execute the steps in the **Create a new policy** and **Publish a policy** sections of the [data-owner policy authoring tutorial](./how-to-policies-data-owner-authoring-generic.md#create-a-new-policy). The result will be a data owner policy similar to the example shown.
-**Example #1: SQL Performance Monitor policy**. This policy assigns the Azure AD principal 'Mateo Gomez' to the *SQL Performance monitoring* action, in the scope of SQL server *relecloud-sql-srv2*. This policy has also been published to that server. Note: Policies related to this action are not supported below server level.
-
-![Screenshot shows a sample data owner policy giving SQL Performance Monitor access to an Azure SQL Database.](./media/how-to-policies-data-owner-sql/data-owner-policy-example-azure-sql-db-performance-monitor.png)
-
-**Example #2: SQL Security Auditor policy**. Similar to example 1, but choose the *SQL Security auditing* action (instead of *SQL Performance monitoring*), when authoring the policy. Note: Policies related to this action are not supported below server level.
-
-**Example #3: Read policy**. This policy assigns the Azure AD principal 'Robert Murphy' to the *SQL Data reader* action, in the scope of SQL server *relecloud-sql-srv2*. This policy has also been published to that server. Note: Policies related to this action are supported below server level (e.g., database, table)
+**Example: Read policy**. This policy assigns the Azure AD principal 'Robert Murphy' to the *SQL Data reader* action, in the scope of SQL server *relecloud-sql-srv2*. This policy has also been published to that server. Note that policies related to this action are supported below server level (e.g., database, table).
![Screenshot shows a sample data owner policy giving Data Reader access to an Azure SQL Database.](./media/how-to-policies-data-owner-sql/data-owner-policy-example-azure-sql-db-data-reader.png)
This section contains a reference of how actions in Microsoft Purview data polic
||Microsoft.Sql/Sqlservers/Databases/Schemas/Tables/Rows| ||Microsoft.Sql/Sqlservers/Databases/Schemas/Views/Rows | |||
-| *SQL Performance Monitor* |Microsoft.Sql/sqlservers/Connect |
-||Microsoft.Sql/sqlservers/databases/Connect |
-||Microsoft.Sql/sqlservers/databases/SystemViewsAndFunctions/DatabasePerformanceState/rows/select |
-||Microsoft.Sql/sqlservers/databases/SystemViewsAndFunctions/ServerPerformanceState/rows/select |
-|||
-| *SQL Security Auditor* |Microsoft.Sql/sqlservers/Connect |
-||Microsoft.Sql/sqlservers/databases/Connect |
-||Microsoft.Sql/sqlservers/SystemViewsAndFunctions/ServerSecurityState/rows/select |
-||Microsoft.Sql/sqlservers/databases/SystemViewsAndFunctions/DatabaseSecurityState/rows/select |
-||Microsoft.Sql/sqlservers/SystemViewsAndFunctions/ServerSecurityMetadata/rows/select |
-||Microsoft.Sql/sqlservers/databases/SystemViewsAndFunctions/DatabaseSecurityMetadata/rows/select |
-|||
## Next steps Check blog, demo and related how-to guides
-* [Demo of access policy for Azure Storage](https://learn-video.azurefd.net/vod/player?id=caa25ad3-7927-4dcc-88dd-6b74bcae98a2)
* [Concepts for Microsoft Purview data owner policies](./concept-policies-data-owner.md)
-* Blog: [Microsoft Purview Data Policy for SQL DevOps access provisioning now in public preview](https://techcommunity.microsoft.com/t5/microsoft-purview-blog/microsoft-purview-data-policy-for-sql-devops-access-provisioning/ba-p/3403174)
-* Blog: [Controlling access to Azure SQL at scale with policies in Purview](https://techcommunity.microsoft.com/t5/azure-sql-blog/private-preview-controlling-access-to-azure-sql-at-scale-with/ba-p/2945491)
* [Enable Microsoft Purview data owner policies on all data sources in a subscription or a resource group](./how-to-policies-data-owner-resource-group.md) * [Enable Microsoft Purview data owner policies on an Arc-enabled SQL Server](./how-to-policies-data-owner-arc-sql-server.md)
purview How To Policies Devops Arc Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-arc-sql-server.md
[DevOps policies](concept-policies-devops.md) are a type of Microsoft Purview access policies. They allow you to manage access to system metadata on data sources that have been registered for *Data use management* in Microsoft Purview. These policies are configured directly in the Microsoft Purview governance portal, and after publishing, they get enforced by the data source.
-This how-to guide covers how to provision access from Microsoft Purview to Arc-enabled SQL Server system metadata (DMVs and DMFs) via *SQL Performance Monitoring* or *SQL Security Auditing* actions. Microsoft Purview access policies apply to Azure AD Accounts only.
+This how-to guide covers how to provision access from Microsoft Purview to Arc-enabled SQL Server system metadata (DMVs and DMFs) via *SQL Performance Monitoring* or *SQL Security Auditing* actions. Microsoft Purview access policies apply to Azure AD accounts only.
## Prerequisites [!INCLUDE [Access policies generic pre-requisites](./includes/access-policies-prerequisites-generic.md)]
purview How To Policies Devops Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-authoring-generic.md
Title: Create, list, update and delete DevOps policies (preview)
+ Title: Create, list, update and delete Microsoft Purview DevOps policies (preview)
description: Step-by-step guide on provisioning access through Microsoft Purview DevOps policies
Last updated 10/11/2022
-# Create, list, update and delete DevOps policies (preview)
+# Create, list, update and delete Microsoft Purview DevOps policies (preview)
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
purview How To Policies Purview Account Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-purview-account-delete.md
+
+ Title: Impact of deleting Microsoft Purview account on access policies (preview)
+description: This guide discusses the consequences of deleting a Microsoft Purview account on published access policies
+++++ Last updated : 10/31/2022++
+# Impact of deleting Microsoft Purview account on access policies
++
+## Important considerations
+Deleting a Microsoft Purview account that has active (that is, published) policies will remove those policies. This means that the access to data sources or datasets that was previously provisioned via those policies will also be removed. This can lead to outages, that is, users or groups in your organization being unable to access critical data. Review the decision to delete the Microsoft Purview account with the people in the Policy Author role at the root collection level before proceeding. To find out who holds that role in the Microsoft Purview account, review the section on managing role assignments in this [guide](./how-to-create-and-manage-collections.md#add-roles-and-restrict-access-through-collections).
+
+Before deleting the Microsoft Purview account, it's advisable that you provision access to the users in your organization who need access to datasets, using an alternate mechanism or a different Purview account. Then delete or unpublish any active policies in an orderly way:
+* [Deleting DevOps policies](how-to-policies-devops-authoring-generic.md#delete-a-devops-policy) - You'll need to delete DevOps policies for them to be unpublished.
+* [Unpublishing Data Owner policies](how-to-policies-data-owner-authoring-generic.md#unpublish-a-policy).
+* [Deleting Self-service access policies](how-to-delete-self-service-data-access-policy.md) - You'll need to delete Self-service access policies for them to be unpublished.
+
+## Next steps
+Check these concept guides
+* [DevOps policies](concept-policies-devops.md)
+* [Data owner access policies](concept-policies-data-owner.md)
+* [Self-service access policies](concept-self-service-data-access-policy.md)
purview Register Scan Azure Multiple Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-multiple-sources.md
To create an access policy on an entire Azure subscription or resource group, fo
## Next steps Now that you've registered your source, follow the below guides to learn more about Microsoft Purview and your data.-- [Data owner policies in Microsoft Purview](concept-policies-data-owner.md)
+- [DevOps policies in Microsoft Purview](concept-policies-devops.md)
- [Data Estate Insights in Microsoft Purview](concept-insights.md) - [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
You can [browse data catalog](how-to-browse-catalog.md) or [search data catalog]
## Next steps Follow the below guides to learn more about Microsoft Purview and your data.-- [Data owner policies in Microsoft Purview](concept-policies-data-owner.md)
+- [DevOps policies in Microsoft Purview](concept-policies-devops.md)
- [Data Estate Insights in Microsoft Purview](concept-insights.md) - [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Oracle Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-oracle-source.md
Previously updated : 05/04/2022 Last updated : 11/01/2022
Currently, the Oracle service name isn't captured in the metadata or hierarchy.
> [!Note] > The driver should be accessible by the self-hosted integration runtime. By default, self-hosted integration runtime uses [local service account "NT SERVICE\DIAHostService"](manage-integration-runtimes.md#service-account-for-self-hosted-integration-runtime). Make sure it has "Read and execute" and "List folder contents" permission to the driver folder.
-## Register
-
-This section describes how to register Oracle in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
-
-### Prerequisites for registration
+### Required permissions for scan
-A read-only access to system tables is required.
+Microsoft Purview supports basic authentication (username and password) for scanning Oracle. The Oracle user must have read access to system tables in order to access advanced metadata. For classification, the user also needs read permission on the tables/views to retrieve sample data.
The user should have permission to create a session and role SELECT\_CATALOG\_ROLE assigned. Alternatively, the user may have SELECT permission granted for every individual system table that this connector queries metadata from:
grant select on V_$INSTANCE to [user];
grant select on v_$database to [user]; ```
-### Authentication for registration
+## Register
-The only supported authentication for an Oracle source is **Basic authentication**.
+This section describes how to register Oracle in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Steps to register
To create and run a new scan, do the following:
1. Select **Continue**.
+1. Select a **scan rule set** for classification. You can choose between the system default, existing custom rule sets, or [create a new rule set](create-a-scan-rule-set.md) inline.
+ 1. Choose your **scan trigger**. You can set up a schedule or run the scan once. 1. Review your scan and select **Save and Run**.
purview Register Scan Sap Bw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-bw.md
Previously updated : 10/21/2022 Last updated : 11/01/2022
When scanning SAP BW source, Microsoft Purview supports extracting technical met
> [!Note] > The driver should be accessible to all accounts in the machine. Don't put it in a path under user account.
-* Deploy the metadata extraction ABAP function module on the SAP server by following the steps mentioned in [ABAP functions deployment guide](abap-functions-deployment-guide.md). You need an ABAP developer account to create the RFC function module on the SAP server. The user account requires sufficient permissions to connect to the SAP server and execute the following RFC function modules:
+ * Self-hosted integration runtime communicates with the SAP server over dispatcher port 32NN and gateway port 33NN, where NN is your SAP instance number from 00 to 99. Make sure the outbound traffic is allowed on your firewall.
+
+* Deploy the metadata extraction ABAP function module on the SAP server by following the steps mentioned in [ABAP functions deployment guide](abap-functions-deployment-guide.md). You need an ABAP developer account to create the RFC function module on the SAP server. For scan execution, the user account requires sufficient permissions to connect to the SAP server and execute the following RFC function modules:
* STFC_CONNECTION (check connectivity) * RFC_SYSTEM_INFO (check system information) * OCS_GET_INSTALLED_COMPS (check software versions)
- * Z_MITI_BW_DOWNLOAD (main metadata import)
+ * Z_MITI_BW_DOWNLOAD (main metadata import, the function module you create following the Purview guide)
+
+ The underlying SAP Java Connector (JCo) libraries may call additional RFC function modules, for example, RFC_PING and RFC_METADATA_GET; refer to [SAP support note 460089](https://launchpad.support.sap.com/#/notes/460089) for details.
## Register
purview Register Scan Sapecc Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sapecc-source.md
Previously updated : 05/04/2022 Last updated : 11/01/2022
When scanning SAP ECC source, Microsoft Purview supports:
> [!Note] > The driver should be accessible by the self-hosted integration runtime. By default, self-hosted integration runtime uses [local service account "NT SERVICE\DIAHostService"](manage-integration-runtimes.md#service-account-for-self-hosted-integration-runtime). Make sure it has "Read and execute" and "List folder contents" permission to the driver folder.
-* Deploy the metadata extraction ABAP function module on the SAP server by following the steps mentioned in [ABAP functions deployment guide](abap-functions-deployment-guide.md). You'll need an ABAP developer account to create the RFC function module on the SAP server. The user account requires sufficient permissions to connect to the SAP server and execute the following RFC function modules:
- * STFC_CONNECTION (check connectivity)
- * RFC_SYSTEM_INFO (check system information)
+ * Self-hosted integration runtime communicates with the SAP server over dispatcher port 32NN and gateway port 33NN, where NN is your SAP instance number from 00 to 99. Make sure the outbound traffic is allowed on your firewall.
+
+* Deploy the metadata extraction ABAP function module on the SAP server by following the steps mentioned in [ABAP functions deployment guide](abap-functions-deployment-guide.md). You'll need an ABAP developer account to create the RFC function module on the SAP server. For scan execution, the user account requires sufficient permissions to connect to the SAP server and execute the following RFC function modules:
+
+ * STFC_CONNECTION (check connectivity)
+ * RFC_SYSTEM_INFO (check system information)
+ * OCS_GET_INSTALLED_COMPS (check software versions)
+ * Z_MITI_DOWNLOAD (main metadata import, the function module you create following the Purview guide)
+
+ The underlying SAP Java Connector (JCo) libraries may call additional RFC function modules, for example, RFC_PING and RFC_METADATA_GET; refer to [SAP support note 460089](https://launchpad.support.sap.com/#/notes/460089) for details.
## Register
purview Register Scan Saps4hana Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-saps4hana-source.md
Previously updated : 01/20/2022 Last updated : 11/01/2022
When scanning SAP S/4HANA source, Microsoft Purview supports:
> [!Note] > The driver should be accessible by the self-hosted integration runtime. By default, self-hosted integration runtime uses [local service account "NT SERVICE\DIAHostService"](manage-integration-runtimes.md#service-account-for-self-hosted-integration-runtime). Make sure it has "Read and execute" and "List folder contents" permission to the driver folder.
-* Deploy the metadata extraction ABAP function module on the SAP server by following the steps mentioned in [ABAP functions deployment guide](abap-functions-deployment-guide.md). You'll need an ABAP developer account to create the RFC function module on the SAP server. The user account requires sufficient permissions to connect to the SAP server and execute the following RFC function modules:
- * STFC_CONNECTION (check connectivity)
- * RFC_SYSTEM_INFO (check system information)
+ * Self-hosted integration runtime communicates with the SAP server over dispatcher port 32NN and gateway port 33NN, where NN is your SAP instance number from 00 to 99. Make sure the outbound traffic is allowed on your firewall.
+
+* Deploy the metadata extraction ABAP function module on the SAP server by following the steps mentioned in [ABAP functions deployment guide](abap-functions-deployment-guide.md). You'll need an ABAP developer account to create the RFC function module on the SAP server. For scan execution, the user account requires sufficient permissions to connect to the SAP server and execute the following RFC function modules:
+
+ * STFC_CONNECTION (check connectivity)
+ * RFC_SYSTEM_INFO (check system information)
+ * OCS_GET_INSTALLED_COMPS (check software versions)
+ * Z_MITI_DOWNLOAD (main metadata import, the function module you create following the Purview guide)
+
+ The underlying SAP Java Connector (JCo) libraries may call additional RFC function modules, for example, RFC_PING and RFC_METADATA_GET; refer to [SAP support note 460089](https://launchpad.support.sap.com/#/notes/460089) for details.
## Register
purview Register Scan Teradata Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-teradata-source.md
Previously updated : 05/04/2022 Last updated : 11/01/2022
When setting up scan, you can choose to scan an entire Teradata server, or scope
### Required permissions for scan
-Microsoft Purview supports basic authentication (username and password) for scanning Teradata. The Teradata user must have read access to system tables in order to access advanced metadata.
+Microsoft Purview supports basic authentication (username and password) for scanning Teradata. The Teradata user must have read access to system tables in order to access advanced metadata. For classification, the user also needs read permission on the tables/views to retrieve sample data.
To retrieve data types of view columns, Microsoft Purview issues a prepare statement for `select * from <view>` for each of the view queries and parses the metadata that contains the data type details for better performance. It requires the SELECT data permission on views. If the permission is missing, view column data types will be skipped.
Follow the steps below to scan Teradata to automatically identify assets. For mo
1. Select **Continue**.
+1. Select a **scan rule set** for classification. You can choose between the system default, existing custom rule sets, or [create a new rule set](create-a-scan-rule-set.md) inline.
+ 1. Choose your **scan trigger**. You can set up a schedule or run the scan once. 1. Review your scan and select **Save and Run**.
search Query Simple Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/query-simple-syntax.md
You can embed Boolean operators in a query string to improve the precision of a
|-- |--|-| | `+` | `pool + ocean` | An AND operation. For example, `pool + ocean` stipulates that a document must contain both terms.| | `|` | `pool | ocean` | An OR operation finds a match when either term is found. In the example, the query engine will return match on documents containing either `pool` or `ocean` or both. Because OR is the default conjunction operator, you could also leave it out, such that `pool ocean` is the equivalent of `pool | ocean`.|
-| `-` | `pool ΓÇô ocean` | A NOT operation returns matches on documents that exclude the term. </p></p>The `searchMode` parameter on a query request controls whether a term with the NOT operator is ANDed or ORed with other terms in the query (assuming there's no boolean operators on the other terms). Valid values include `any` or `all`. </p>`searchMode=any` increases the recall of queries by including more results, and by default `-` will be interpreted as "OR NOT". For example, `wifi -luxury` will match documents that either contain the term `wifi` or those that don't contain the term `luxury`. </p>`searchMode=all` increases the precision of queries by including fewer results, and by default - will be interpreted as "AND NOT". For example, `wifi -luxury` will match documents that contain the term `wifi` and don't contain the term "luxury". This is arguably a more intuitive behavior for the `-` operator. Therefore, you should consider using `searchMode=all` instead of `searchMode=any` if you want to optimize searches for precision instead of recall, *and* Your users frequently use the `-` operator in searches.</p> When deciding on a `searchMode` setting, consider the user interaction patterns for queries in various applications. Users who are searching for information are more likely to include an operator in a query, as opposed to e-commerce sites that have more built-in navigation structures. |
+| `-` | `pool – ocean` | A NOT operation returns matches on documents that exclude the term. </p></p>The `searchMode` parameter on a query request controls whether a term with the NOT operator is ANDed or ORed with other terms in the query (assuming there are no Boolean operators on the other terms). Valid values include `any` or `all`. </p>`searchMode=any` increases the recall of queries by including more results, and by default `-` will be interpreted as "OR NOT". For example, `pool - ocean` will match documents that either contain the term `pool` or those that don't contain the term `ocean`. </p>`searchMode=all` increases the precision of queries by including fewer results, and by default `-` will be interpreted as "AND NOT". For example, with `searchMode=all`, the query `pool - ocean` will match documents that contain the term `pool` and don't contain the term `ocean`. This is arguably a more intuitive behavior for the `-` operator. Therefore, you should consider using `searchMode=all` instead of `searchMode=any` if you want to optimize searches for precision instead of recall, *and* your users frequently use the `-` operator in searches.</p> When deciding on a `searchMode` setting, consider the user interaction patterns for queries in various applications. Users who are searching for information are more likely to include an operator in a query, as opposed to e-commerce sites that have more built-in navigation structures. |
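The `searchMode` behavior described above is set per request. The following is a minimal sketch of a Search Documents REST call that applies "AND NOT" semantics to the `-` operator; the service name, index name, and API key are placeholders, and the API version shown is only an example.

```http
POST https://{service-name}.search.windows.net/indexes/{index-name}/docs/search?api-version=2020-06-30
Content-Type: application/json
api-key: {query-key}

{
  "queryType": "simple",
  "searchMode": "all",
  "search": "pool -ocean"
}
```

With `searchMode` set to `any` instead, the same request broadens to "OR NOT" semantics and returns more results.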
<a name="prefix-search"></a>
search Search Get Started Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-vs-code.md
Title: 'Quickstart: Use Visual Studio Code with Search'
+ Title: 'Use Visual Studio Code with Search'
-description: Learn how to install and use the Visual Studio Code extension for Azure Cognitive Search.
--
+description: This article provides documentation for the Visual Studio Code extension for Azure Cognitive Search.
++ Previously updated : 08/19/2022 Last updated : 10/31/2022
-# Get started with Azure Cognitive Search using Visual Studio Code
+# Work with Azure Cognitive Search using the Visual Studio Code extension (preview - retired)
-This article explains how to formulate REST API requests interactively using the [Azure Cognitive Search REST APIs](/rest/api/searchservice) and [Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch). With the [Visual Studio code extension for Azure Cognitive Search (preview)](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch) and these instructions, you can send requests and view responses before writing any code.
+> [!IMPORTANT]
+> The Visual Studio Code Extension for Azure Cognitive Search was introduced as a **public preview feature** under [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's now discontinued.
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+If you have an existing installation of Visual Studio Code Extension for Azure Cognitive Search, you can continue to use it, but it will no longer be updated, and it isn't guaranteed to work with future versions of Azure Cognitive Search.
-> [!IMPORTANT]
-> This extension is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+This article is for current users of the extension.
## Prerequisites
-The following services and tools are required for this quickstart.
- + [Visual Studio Code](https://code.visualstudio.com/download)
-+ [Azure Cognitive Search for Visual Studio Code (Preview)](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch)
-
-+ [Create an Azure Cognitive Search service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) under your current subscription. You can use a free service for this quickstart.
-
-## Install the extension
-
-Start [Visual Studio Code](https://code.visualstudio.com). Select the **Extensions** tab on the activity bar, then search for *Azure Cognitive Search*. Find the extension in the search results, and select **Install**.
-
-![VS Code extension pane](media/search-get-started-rest/download-extension.png "Downloading the VS Code extension")
-
-Alternatively, you can install the [Azure Cognitive Search extension](https://aka.ms/vscode-search) from the Visual Studio Code marketplace in a web browser.
-
-You should see a new Azure tab appear on the activity bar if you don't already have it.
++ Although the extension is no longer available in the Visual Studio Code Marketplace, the code is open sourced at [https://github.com/microsoft/vscode-azurecognitivesearch](https://github.com/microsoft/vscode-azurecognitivesearch). You can clone and modify the tool for your own use.
-![VS Code Azure pane](media/search-get-started-rest/azure-pane.png "Azure pane in VS Code")
++ [Azure Cognitive Search service](search-create-service-portal.md) ## Connect to your subscription
search Search Query Odata Orderby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-orderby.md
Each clause has sort criteria, optionally followed by a sort direction (`asc` fo
The sort criteria can either be the path of a `sortable` field or a call to either the [`geo.distance`](search-query-odata-geo-spatial-functions.md) or the [`search.score`](search-query-odata-search-score-function.md) functions.
+For string fields, the default [ASCII sort order](https://en.wikipedia.org/wiki/ASCII#Printable_characters) and default [Unicode sort order](https://en.wikipedia.org/wiki/List_of_Unicode_characters) will be used. By default, sorting is case sensitive but you can use a [normalizer](search-normalizers.md) to preprocess the text before sorting to change this behavior. You can also use an `asciifolding` normalizer to convert non-ASCII characters to their ASCII equivalent, if one exists.
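As a minimal sketch of where a normalizer is applied, the index definition below assumes a hypothetical `HotelName` field, the predefined `lowercase` normalizer, and a preview API version; replace the placeholder values with your own.

```http
PUT https://{service-name}.search.windows.net/indexes/{index-name}?api-version=2021-04-30-Preview
Content-Type: application/json
api-key: {admin-key}

{
  "name": "{index-name}",
  "fields": [
    { "name": "HotelId", "type": "Edm.String", "key": true },
    { "name": "HotelName", "type": "Edm.String", "sortable": true, "normalizer": "lowercase" }
  ]
}
```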
+ If multiple documents have the same sort criteria and the `search.score` function isn't used (for example, if you sort by a numeric `Rating` field and three documents all have a rating of 4), ties will be broken by document score in descending order. When document scores are the same (for example, when there's no full-text search query specified in the request), then the relative ordering of the tied documents is indeterminate. You can specify multiple sort criteria. The order of expressions determines the final sort order. For example, to sort descending by score, followed by Rating, the syntax would be `$orderby=search.score() desc,Rating desc`.
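As an illustration of multiple sort criteria, here's a minimal sketch that issues the equivalent query through the JavaScript SDK [@azure/search-documents](https://www.npmjs.com/package/@azure/search-documents); the service endpoint, index name, key, and the `Rating` field are placeholders, not values from this article:

```javascript
// A minimal sketch, not the article's sample code. Placeholder endpoint, index
// name, and key; assumes a sortable numeric field named Rating.
const { SearchClient, AzureKeyCredential } = require("@azure/search-documents");

const client = new SearchClient(
  "https://<your-search-service>.search.windows.net",
  "<your-index-name>",
  new AzureKeyCredential("<your-query-key>")
);

async function searchSorted() {
  // Sort by relevance score first, then by Rating, both descending.
  const searchResults = await client.search("luxury", {
    orderBy: ["search.score() desc", "Rating desc"]
  });
  for await (const result of searchResults.results) {
    console.log(result.document);
  }
}

searchSorted().catch(console.error);
```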
search Tutorial Javascript Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-create-load-index.md
Previously updated : 05/21/2021 Last updated : 11/01/2022 ms.devlang: javascript
ms.devlang: javascript
# 2 - Create and load Search Index with JavaScript Continue to build your Search-enabled website by:
-* Creating a Search resource with the VS Code extension
-* Creating a new index and importing data with JavaScript using the sample script and Azure SDK [@azure/search-documents](https://www.npmjs.com/package/@azure/search-documents).
+* Create a Search resource with the VS Code extension
+* Create a new index
+* Import data with JavaScript using the [sample script](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/main/search-website-functions-v4/bulk-insert/bulk_insert_books.js) and Azure SDK [@azure/search-documents](https://www.npmjs.com/package/@azure/search-documents).
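For orientation before you run it, the core of the upload step looks roughly like the following sketch (placeholders only; the actual sample script builds the batch from the books data set):

```javascript
// A minimal sketch of the upload step, not the full bulk_insert_books.js script.
// YOUR-SEARCH-RESOURCE-NAME and YOUR-SEARCH-ADMIN-KEY are placeholders; the
// document fields shown here are hypothetical.
const { SearchClient, AzureKeyCredential } = require("@azure/search-documents");

const client = new SearchClient(
  "https://YOUR-SEARCH-RESOURCE-NAME.search.windows.net",
  "good-books",
  new AzureKeyCredential("YOUR-SEARCH-ADMIN-KEY")
);

async function uploadBooks(books) {
  // Upload a batch of documents to the index.
  const result = await client.uploadDocuments(books);
  console.log(`Uploaded ${result.results.length} documents`);
}

uploadBooks([{ id: "1", title: "Sample Book", authors: ["A. Author"] }])
  .catch(console.error);
```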
## Create an Azure Search resource
Get your Search resource admin key with the Visual Studio Code extension.
:::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-copy-admin-key.png" alt-text="In the Side bar, right-click on your Search resource and select **Copy Admin Key**.":::
-1. Keep this admin key, you will need to use it in [a later section](#prepare-the-bulk-import-script-for-search).
+1. Keep this admin key; you'll need to use it in [a later section](#prepare-the-bulk-import-script-for-search).
## Prepare the bulk import script for Search
The script uses the Azure SDK for Cognitive Search:
* [npm package @azure/search-documents](https://www.npmjs.com/package/@azure/search-documents) * [Reference Documentation](/javascript/api/overview/azure/search-documents-readme)
-1. In Visual Studio Code, open the `bulk_insert_books.js` file in the subdirectory, `search-website/bulk-insert`, replace the following variables with your own values to authenticate with the Azure Search SDK:
+1. In Visual Studio Code, open the `bulk_insert_books.js` file in the subdirectory, `search-website-functions-v4/bulk-insert`, and replace the following variables with your own values to authenticate with the Azure Search SDK:
* YOUR-SEARCH-RESOURCE-NAME * YOUR-SEARCH-ADMIN-KEY
- :::code language="javascript" source="~/azure-search-javascript-samples/search-website/bulk-insert/bulk_insert_books.js" highlight="16,17" :::
+ :::code language="javascript" source="~/azure-search-javascript-samples/search-website-functions-v4/bulk-insert/bulk_insert_books.js" highlight="16,17" :::
-1. Open an integrated terminal in Visual Studio for the project directory's subdirectory, `search-website/bulk-insert`, and run the following command to install the dependencies.
+1. Open an integrated terminal in Visual Studio for the project directory's subdirectory, `search-website-functions-v4/bulk-insert`, and run the following command to install the dependencies.
```bash npm install
The script uses the Azure SDK for Cognitive Search:
## Run the bulk import script for Search
-1. Continue using the integrated terminal in Visual Studio for the project directory's subdirectory, `search-website/bulk-insert`, to run the following bash command to run the `bulk_insert_books.js` script:
+1. Continue using the integrated terminal in Visual Studio for the project directory's subdirectory, `search-website-functions-v4/bulk-insert`, to run the following bash command to run the `bulk_insert_books.js` script:
```javascript npm start
The script uses the Azure SDK for Cognitive Search:
## Review the new Search Index
-Once the upload completes, the Search Index is ready to use. Review your new Index.
-
-1. In Visual Studio Code, open the Azure Cognitive Search extension and select your Search resource.
-
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-search-extension-view-resource.png" alt-text="In Visual Studio Code, open the Azure Cognitive Search extension and open your Search resource.":::
-
-1. Expand Indexes, then Documents, then `good-books`, then select a doc to see all the document-specific data.
-
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-search-extension-view-docs.png" lightbox="media/tutorial-javascript-create-load-index/visual-studio-code-search-extension-view-docs.png" alt-text="Expand Indexes, then `good-books`, then select a doc.":::
## Rollback bulk import file changes
-Use the following git command in the VS Code integrated terminal at the `bulk-insert` directory, to rollback the changes. They are not needed to continue the tutorial and you shouldn't save or push these secrets to your repo.
-
-```git
-git checkout .
-```
## Copy your Search resource name
-Note your **Search resource name**. You will need this to connect the Azure Function app to your Search resource.
-> [!CAUTION]
-> While you may be tempted to use your Search admin key in the Azure Function, that isn't following the principle of least privilege. The Azure Function will use the query key to conform to least privilege.
## Next steps
search Tutorial Javascript Deploy Static Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-deploy-static-web-app.md
Title: "JavaScript tutorial: Deploy search-enabled website"
-description: Deploy search-enabled website to Azure Static web app.
+description: Deploy search-enabled website to Azure Static Web Apps.
Previously updated : 08/30/2022 Last updated : 10/26/2022 ms.devlang: javascript # 3 - Deploy the search-enabled website
-Deploy the search-enabled website as an Azure Static web app. This deployment includes both the React app and the Function app.
-
-The Static Web app pulls the information and files for deployment from GitHub using your fork of the samples repository.
-
-## Create a Static Web App in Visual Studio Code
-
-1. Select **Azure** from the Activity Bar, then open **Resources** from the Side bar.
-
-1. Right-click **Static Web Apps** and then select **Create Static Web App (Advanced)**.
-
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-create-static-web-app-resource-advanced.png" alt-text="Right-click **Static Web Apps** and then select **Create Static Web App (Advanced)**":::
-
-1. If you see a pop-up window in VS Code asking which branch you want to deploy from, select the default branch, usually **master** or **main**.
-
- This setting means only changes you commit to that branch are deployed to your static web app.
-
-1. If you see a pop-up window asking you to commit your changes, do not do this. The secrets from the bulk import step should not be committed to the repository.
-
- To rollback the changes, in VS Code select the Source Control icon in the Activity bar, then select each changed file in the Changes list and select the **Discard changes** icon.
-
-1. Follow the prompts to provide the following information:
-
- |Prompt|Enter|
- |--|--|
- |Enter the name for the new Static Web App.|Create a unique name for your resource. For example, you can prepend your name to the repository name such as, `joansmith-azure-search-javascript-samples`. |
- |Select a resource group for new resources.|Use the resource group you created for this tutorial.|
- |Select a SKU| Select the free SKU for this tutorial.|
- |Choose build preset to configure default project structure.|Select **Custom**|
- |Select the location of your application code|`search-website`<br><br>This is the path, from the root of the repository, to your Azure Static web app. |
- |Select the location of your Azure Function code|`search-website/api`<br><br>This is the path, from the root of the repository, to your Azure Function app. |
- |Enter the path of your build output...|`build`<br><br>This is the path, from your Azure Static web app, to your generated files.|
- |Select a location for new resources.|Select a region close to you.|
-
-1. The resource is created, select **Open Actions in GitHub** from the Notifications. This opens a browser window pointed to your forked repo.
-
- The list of actions indicates your web app, both client and functions, were successfully pushed to your Azure Static Web App.
-
- Wait until the build and deployment complete before continuing. This may take a minute or two to finish.
-
-## Get Cognitive Search query key in Visual Studio Code
-
-1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-
-1. In the Side bar, select your Azure subscription under the **Azure: Cognitive Search** area, then right-click on your Search resource and select **Copy Query Key**.
-
- :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-copy-query-key.png" alt-text="In the Side bar, select your Azure subscription under the **Azure: Cognitive Search** area, then right-click on your Search resource and select **Copy Query Key**.":::
-
-1. Keep this query key, you will need to use it in the next section. The query key is able to query your Index.
-
-## Add configuration settings in Azure portal
-
-The Azure Function app won't return Search data until the Search secrets are in settings.
-
-1. Select **Azure** from the Activity Bar.
-1. Right-click on your Static web app resource then select **Open in Portal**.
-
- :::image type="content" source="media/tutorial-javascript-static-web-app/open-static-web-app-in-azure-portal.png" alt-text="Right-click on your JavaScript Static web app resource then select Open in Portal.":::
-
-1. Select **Configuration** then select **+ Add**.
-
- :::image type="content" source="media/tutorial-javascript-static-web-app/add-new-application-setting-to-static-web-app-in-portal.png" alt-text="Select Configuration then select Add for your JavaScript app.":::
-
-1. Add each of the following settings:
-
- |Setting|Your Search resource value|
- |--|--|
- |SearchApiKey|Your Search query key|
- |SearchServiceName|Your Search resource name|
- |SearchIndexName|`good-books`|
- |SearchFacets|`authors*,language_code`|
-
- Azure Cognitive Search requires different syntax for filtering collections than it does for strings. Add a `*` after a field name to denote that the field is of type `Collection(Edm.String)`. This allows the Azure Function to add filters correctly to queries.
-
-1. Select **Save** to save the settings.
-
- :::image type="content" source="media/tutorial-javascript-static-web-app/save-new-application-setting-to-static-web-app-in-portal.png" alt-text="Select Save to save the settings.":::
-
-1. Return to VS Code.
-1. Refresh your Static web app to see the Static web app's application settings.
-
- :::image type="content" source="media/tutorial-javascript-static-web-app/visual-studio-code-extension-fresh-resource.png" alt-text="Refresh your Static web app to see the Static web app's application settings.":::
-
-## Use search in your Static web app
-
-1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-1. In the Side bar, **right-click on your Azure subscription** under the `Static web apps` area and find the Static web app you created for this tutorial.
-1. Right-click the Static Web App name and select **Browse site**.
-
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-browse-static-web-app.png" alt-text="Right-click the Static Web App name and select **Browse site**.":::
-
-1. Select **Open** in the pop-up dialog.
-1. In the website search bar, enter a search query such as `code`, _slowly_ so the suggest feature suggests book titles. Select a suggestion or continue entering your own query. Press enter when you've completed your search query.
-1. Review the results then select one of the books to see more details.
-
-## Clean up resources
-
-To clean up the resources created in this tutorial, delete the resource group.
-
-1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-
-1. In the Side bar, **right-click on your Azure subscription** under the `Resource Groups` area and find the resource group you created for this tutorial.
-1. Right-click the resource group name then select **Delete**.
- This deletes both the Search and Static web app resources.
-1. If you no longer want the GitHub fork of the sample, remember to delete that on GitHub. Go to your fork's **Settings** then delete the fork.
- ## Next steps
search Tutorial Javascript Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-overview.md
Title: "JavaScript tutorial: Search integration overview"
-description: Technical overview and setup for adding search to a website and deploying to Azure Static Web App.
+description: Technical overview and setup for adding search to a website and deploying to Azure Static Web Apps.
Previously updated : 08/30/2022 Last updated : 10/26/2022 ms.devlang: javascript # 1 - Overview of adding search to a website
-This tutorial builds a website to search through a catalog of books then deploys the website to an Azure Static Web App.
+This tutorial builds a website to search through a catalog of books, and then deploys the website to an Azure Static Web Apps resource.
The application is available:
-* [Sample](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website)
+* [Sample](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website-functions-v4)
* [Demo website - aka.ms/azs-good-books](https://aka.ms/azs-good-books) ## What does the sample do?
-This sample website provides access to a catalog of 10,000 books. A user can search the catalog by entering text in the search bar. While the user enters text, the website uses the Search Index's suggest feature to complete the text. Once the query finishes, the list of books is displayed with a portion of the details. A user can select a book to see all the details, stored in the Search Index, of the book.
--
-The search experience includes:
-
-* Search – provides search functionality for the application.
-* Suggest – provides suggestions as the user is typing in the search bar.
-* Document Lookup – looks up a document by ID to retrieve all of its contents for the details page.
## How is the sample organized?
-The [sample](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website) includes the following:
+The [sample](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website-functions-v4) includes the following:
|App|Purpose|GitHub<br>Repository<br>Location| |--|--|--|
-|Client|React app (presentation layer) to display books, with search. It calls the Azure Function app. |[/search-website/src](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website/src)|
-|Server|Azure Function app (business layer) - calls the Azure Cognitive Search API using JavaScript SDK |[/search-website/api](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website/src)|
-|Bulk insert|JavaScript file to create the index and add documents to it.|[/search-website/bulk-insert](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website/bulk-insert)|
+|Client|React app (presentation layer) to display books, with search. It calls the Azure Function app. |[/search-website-functions-v4/client](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website-functions-v4/client)|
+|Server|Azure Function app (business layer) - calls the Azure Cognitive Search API using JavaScript SDK |[/search-website-functions-v4/api](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website-functions-v4/api)|
+|Bulk insert|JavaScript file to create the index and add documents to it.|[/search-website-functions-v4/bulk-insert](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website-functions-v4/bulk-insert)|
## Set up your development environment Install the following for your local development environment. -- [Node.js 12 or 14](https://nodejs.org/en/download)
+- [Node.js LTS](https://nodejs.org/en/download)
+ - Select latest runtime and version from this [list of supported language versions](/azure/azure-functions/functions-versions?tabs=azure-cli%2Clinux%2Cin-process%2Cv4&pivots=programming-language-javascript#languages).
- If you have a different version of Node.js installed on your local computer, consider using [Node Version Manager](https://github.com/nvm-sh/nvm) (nvm) or a Docker container. - [Git](https://git-scm.com/downloads) - [Visual Studio Code](https://code.visualstudio.com/) and the following extensions
- - [Azure Resources](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureresourcegroups)
- - [Azure Cognitive Search 0.2.0+](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch)
+ - [Azure Cognitive Search](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch)
- [Azure Static Web App](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestaticwebapps) - Optional:
- - This tutorial doesn't run the Azure Function API locally but if you intend to run it locally, you need to install [azure-functions-core-tools](../azure-functions/functions-run-local.md?tabs=linux%2ccsharp%2cbash) globally with the following bash command:
+ - This tutorial doesn't run the Azure Function API locally. If you intend to run it locally, you need to install [azure-functions-core-tools](../azure-functions/functions-run-local.md?tabs=linux%2ccsharp%2cbash) globally with the following bash command:
```bash
- npm install -g azure-functions-core-tools
+ npm install -g azure-functions-core-tools@4
``` ## Fork and clone the search sample with git
-Forking the sample repository is critical to be able to deploy the Static Web App. The web apps determine the build actions and deployment content based on your own GitHub fork location. Code execution in the Static Web App is remote, with Azure Static Web Apps reading from the code in your forked sample.
+Forking the sample repository is critical so that you can deploy the Static Web App. The static web app determines the build actions and deployment content based on your own GitHub fork location. Code execution in the Static Web App is remote, with the static web app reading from the code in your forked sample.
-1. On GitHub, fork the [sample repository](https://github.com/Azure-Samples/azure-search-javascript-samples).
+1. On GitHub, [fork the sample repository](https://github.com/Azure-Samples/azure-search-javascript-samples/fork).
Complete the fork process in your web browser with your GitHub account. This tutorial uses your fork as part of the deployment to an Azure Static Web App.
-1. At a bash terminal, download the sample application to your local computer.
-
- Replace `YOUR-GITHUB-ALIAS` with your GitHub alias.
-
- ```bash
- git clone https://github.com/YOUR-GITHUB-ALIAS/azure-search-javascript-samples
- ```
-
-1. In Visual Studio Code, open your local folder of the cloned repository. The remaining tasks are accomplished from Visual Studio Code, unless specified.
## Create a resource group for your Azure resources
-1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-1. In Resources, select Add (**+**), and then select **Create Resource Group**.
-
- :::image type="content" source="./media/tutorial-javascript-overview/visual-studio-code-create-resource-group.png" alt-text="In Resources, select Add (**+**), and then select **Create Resource Group**.":::
-1. Enter a resource group name, such as `cognitive-search-website-tutorial`.
-1. Select a location close to you.
-1. When you create the Cognitive Search and Static Web App resources, later in the tutorial, use this resource group.
-
- Creating a resource group gives you a logical unit to manage the resources, including deleting them when you are finished using them.
## Next steps
search Tutorial Javascript Search Query Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-search-query-integration.md
Previously updated : 03/09/2021 Last updated : 10/26/2022 ms.devlang: javascript # 4 - JavaScript Search integration cheat sheet
-In the previous lessons, you added search to a Static Web App. This lesson highlights the essential steps that establish integration. If you are looking for a cheat sheet on how to integrate search into your JavaScript app, this article explains what you need to know.
+In the previous lessons, you added search to a Static Web App. This lesson highlights the essential steps that establish integration. If you're looking for a cheat sheet on how to integrate search into your JavaScript app, this article explains what you need to know.
The application is available:
-* [Sample](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website)
+* [Sample](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website-functions-v4)
* [Demo website - aka.ms/azs-good-books](https://aka.ms/azs-good-books) ## Azure SDK @azure/search-documents
The Function app authenticates through the SDK to the cloud-based Cognitive Sear
## Configure secrets in a configuration file ## Azure Function: Search the catalog
-The `Search` [API](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website/api/Search/index.js) takes a search term and searches across the documents in the Search Index, returning a list of matches.
+The `Search` [API](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website-functions-v4/api/Search/index.js) takes a search term and searches across the documents in the Search Index, returning a list of matches.
-Routing for the Search API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website/api/Search/function.json) bindings.
+Routing for the Search API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website-functions-v4/api/Search/function.json) bindings.
The Azure Function pulls in the Search configuration information, and fulfills the query. ## Client: Search from the catalog Call the Azure Function in the React client with the following code. ## Azure Function: Suggestions from the catalog
-The `Suggest` [API](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website/api/Suggest/index.js) takes a search term while a user is typing and suggests search terms such as book titles and authors across the documents in the search index, returning a small list of matches.
+The `Suggest` [API](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website-functions-v4/api/Suggest/index.js) takes a search term while a user is typing and suggests search terms such as book titles and authors across the documents in the search index, returning a small list of matches.
-The search suggester, `sg`, is defined in the [schema file](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website/bulk-insert/good-books-index.json) used during bulk upload.
+The search suggester, `sg`, is defined in the [schema file](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website-functions-v4/bulk-insert/good-books-index.json) used during bulk upload.
-Routing for the Suggest API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website/api/Suggest/function.json) bindings.
+Routing for the Suggest API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website-functions-v4/api/Suggest/function.json) bindings.
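For orientation, a suggester call through the SDK might look roughly like the following sketch (not the sample's exact code). `sg` is the suggester name from `good-books-index.json`; the endpoint and key are placeholders:

```javascript
// A rough sketch of a suggester call, not the sample's implementation.
const { SearchClient, AzureKeyCredential } = require("@azure/search-documents");

const client = new SearchClient(
  "https://YOUR-SEARCH-RESOURCE-NAME.search.windows.net",
  "good-books",
  new AzureKeyCredential("YOUR-SEARCH-QUERY-KEY")
);

async function suggestTitles(term) {
  // Return up to five suggestions that match the partial term.
  const response = await client.suggest(term, "sg", { top: 5 });
  return response.results.map(r => r.text);
}

suggestTitles("cod").then(console.log).catch(console.error);
```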
## Client: Suggestions from the catalog The Suggest function API is called in the React app at `\src\components\SearchBar\SearchBar.js` as part of component initialization: ## Azure Function: Get specific document
-The `Lookup` [API](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website/api/Lookup/index.js) takes a ID and returns the document object from the Search Index.
+The `Lookup` [API](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website-functions-v4/api/Lookup/index.js) takes an ID and returns the document object from the Search Index.
Routing for the Lookup API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website/api/Lookup/function.json) bindings. ## Client: Get specific document This function API is called in the React app at `\src\pages\Details\Detail.js` as part of component initialization: ## Next steps
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Previously updated : 10/12/2022 Last updated : 10/31/2022
Learn about the latest updates to Azure Cognitive Search functionality, docs, an
## November 2022
-|Sample&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description |
-||--|
-| [Query performance dashboard](https://github.com/Azure-Samples/azure-samples-search-evaluation) | This Application Insights sample demonstrates an approach for deep monitoring of query usage and performance of an Azure Cognitive Search index. It includes a JSON template that creates a workbook and dashboard in Application Insights and a Jupyter Notebook that populates the dashboard with simulated data. |
+| Item&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
+|--||--|
+| [Visual Studio Code extension for Azure Cognitive Search](https://github.com/microsoft/vscode-azurecognitivesearch/blob/master/README.md) | Feature | **Retired**. This preview feature isn't moving forward to general availability and has been removed from Visual Studio Code Marketplace. See the [documentation](search-get-started-vs-code.md) for details. |
+| [Query performance dashboard](https://github.com/Azure-Samples/azure-samples-search-evaluation) | Sample | This Application Insights sample demonstrates an approach for deep monitoring of query usage and performance of an Azure Cognitive Search index. It includes a JSON template that creates a workbook and dashboard in Application Insights and a Jupyter Notebook that populates the dashboard with simulated data. |
## October 2022
-|Content&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description |
-||--|
-| [Compliance risk analysis using Azure Cognitive Search](/azure/architecture/guide/ai/compliance-risk-analysis) | Published on Azure Architecture Center, this guide covers the implementation of a compliance risk analysis solution that uses Azure Cognitive Search. |
-| [Beiersdorf customer story using Azure Cognitive Search](https://customers.microsoft.com/story/1552642769228088273-Beiersdorf-consumer-goods-azure-cognitive-search) | This customer story showcases semantic search and document summarization to provide researchers with ready access to institutional knowledge. |
+|Item &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
+|||-|
+| [Compliance risk analysis using Azure Cognitive Search](/azure/architecture/guide/ai/compliance-risk-analysis) | Content | Published on Azure Architecture Center, this guide covers the implementation of a compliance risk analysis solution that uses Azure Cognitive Search. |
+| [Beiersdorf customer story using Azure Cognitive Search](https://customers.microsoft.com/story/1552642769228088273-Beiersdorf-consumer-goods-azure-cognitive-search) | Content | This customer story showcases semantic search and document summarization to provide researchers with ready access to institutional knowledge. |
## September 2022
-|Sample&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description |
-||--|
-| [Azure Cognitive Search Lab](https://github.com/Azure-Samples/azure-search-lab/blob/main/README.md) | This C# sample provides the source code for building a web front-end that accesses all of the REST API calls against an index. This tool is used by support engineers to investigate customer support issues. You can try this [demo site](https://azuresearchlab.azurewebsites.net/) before building your own copy. |
-| [Event-driven indexing for Cognitive Search](https://github.com/aditmer/Event-Driven-Indexing-For-Cognitive-Search/blob/main/README.md) | This C# sample is an Azure Function app that demonstrates event-driven indexing in Azure Cognitive Search. If you've used indexers and skillsets before, you know that indexers can run on demand or on a schedule, but not in response to events. This demo shows you how to set up an indexing pipeline that responds to data update events. |
+|Item &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
+|||-|
+| [Azure Cognitive Search Lab](https://github.com/Azure-Samples/azure-search-lab/blob/main/README.md) | Sample | This C# sample provides the source code for building a web front-end that accesses all of the REST API calls against an index. This tool is used by support engineers to investigate customer support issues. You can try this [demo site](https://azuresearchlab.azurewebsites.net/) before building your own copy. |
+| [Event-driven indexing for Cognitive Search](https://github.com/aditmer/Event-Driven-Indexing-For-Cognitive-Search/blob/main/README.md) | Sample | This C# sample is an Azure Function app that demonstrates event-driven indexing in Azure Cognitive Search. If you've used indexers and skillsets before, you know that indexers can run on demand or on a schedule, but not in response to events. This demo shows you how to set up an indexing pipeline that responds to data update events. |
## August 2022
-|Tutorial&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description |
-||--|
-| [Tutorial: Index large data from Apache Spark](search-synapseml-cognitive-services.md) | This tutorial explains how to use the SynapseML open-source library to push data from Apache Spark into a search index. It also shows you how to make calls to Cognitive Services to get AI enrichment without skillsets and indexers. |
+|Item&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
+|||-|
+| [Tutorial: Index large data from Apache Spark](search-synapseml-cognitive-services.md) | Content | This tutorial explains how to use the SynapseML open-source library to push data from Apache Spark into a search index. It also shows you how to make calls to Cognitive Services to get AI enrichment without skillsets and indexers. |
## June 2022
-|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
-||--||
-| [Semantic search](semantic-search-overview.md) | New support for Storage Optimized tiers (L1, L2) | Public preview. |
-| [Debug Sessions](cognitive-search-debug-session.md) | Debug sessions, a built-in editor that runs in Azure portal, is now generally available. | Generally available. |
+|Item&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
+|||-|
+| [Semantic search (preview)](semantic-search-overview.md) | Feature | New support for Storage Optimized tiers (L1, L2). |
+| [Debug Sessions](cognitive-search-debug-session.md) | Feature | **General availability**. Debug sessions, a built-in editor that runs in Azure portal, is now generally available. |
## May 2022
-|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
-||--||
-| [Power Query connector preview](search-how-to-index-power-query-data-sources.md) | This indexer data source was introduced in May 2021 but won't be moving forward. Please migrate your data ingestion code by November 2022. See the feature documentation for migration guidance. | Retired |
+|Item &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
+|||-|
+| [Power Query connector preview](search-how-to-index-power-query-data-sources.md) | Feature | **Retired**. This indexer data source was introduced in May 2021 but won't be moving forward. Migrate your data indexing code by November 2022. See the feature documentation for migration guidance. |
## February 2022
-|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
-||--||
-| [Index aliases](search-how-to-alias.md) | An index alias is a secondary name that can be used to refer to an index for querying, indexing, and other operations. You can create an alias that maps to a search index and substitute the alias name in places where you would otherwise reference an index name. This gives you added flexibility if you ever need to change which index your application is pointing to. Instead of updating the references to the index name in your application, you can just update the mapping for your alias. | Public preview REST APIs (no portal support at this time).|
+|Item &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
+|||-|
+| [Index aliases](search-how-to-alias.md) | Feature | An index alias is a secondary name that can be used to refer to an index for querying, indexing, and other operations. When index names change, for example if you version the index, instead of updating the references to an index name in your application, you can just update the mapping for your alias. |
## 2021 announcements | Month | Feature | Description | |-||-|
-| December | [Enhanced configuration for semantic search](semantic-how-to-query-request.md#create-a-semantic-configuration) | This is a new addition to the 2021-04-30-Preview API, and are now required for semantic queries. Public preview in the portal and preview REST APIs.|
+| December | [Enhanced configuration for semantic search](semantic-how-to-query-request.md#create-a-semantic-configuration) | This configuration is a new addition to the 2021-04-30-Preview API, and is now required for semantic queries. Public preview in the portal and preview REST APIs.|
| November | [Azure Files indexer (preview)](./search-file-storage-integration.md) | Public preview in the portal and preview REST APIs.| | July | [Search REST API 2021-04-30-Preview](/rest/api/searchservice/index-preview) | Public preview announcement. | | July | [Role-based access control for data plane (preview)](search-security-rbac.md) | Public preview announcement. |
sentinel Quickstart Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/quickstart-onboard.md
Microsoft Sentinel comes with many connectors for Microsoft products, for exampl
- Single-region data residency is currently provided only in the Southeast Asia (Singapore) region of the Asia Pacific geography, and in the Brazil South (Sao Paulo State) region of the Brazil geography.
- > [!IMPORTANT]
- > - By enabling certain rules that make use of the machine learning (ML) engine, **you give Microsoft permission to copy relevant ingested data outside of your Microsoft Sentinel workspace's geography** as may be required by the machine learning engine to process these rules.
- ## Enable Microsoft Sentinel <a name="enable"></a> 1. Sign in to the Azure portal. Make sure that the subscription in which Microsoft Sentinel is created is selected.
For more information, see:
- **Get started**: - [Get started with Microsoft Sentinel](get-visibility.md) - [Create custom analytics rules to detect threats](detect-threats-custom.md)
- - [Connect your external solution using Common Event Format](connect-common-event-format.md)
+ - [Connect your external solution using Common Event Format](connect-common-event-format.md)
sentinel Deployment Solution Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-solution-configuration.md
By default, all analytics rules provided in the Microsoft Sentinel Solution for
7. Function module tested 8. The SAP audit log monitoring analytics rules
-## Reduce the amount of SAP log ingestion
+## Enable or disable the ingestion of specific SAP logs
-To reduce the number of logs ingested into the Microsoft Sentinel workspace, you can stop ingestion for a specific log. To do this, edit the *systemconfig.ini* file, and for the relevant log, change the `True` value to `False`.
+To enable or disable the ingestion of a specific log:
+
+1. Edit the *systemconfig.ini* file located under */opt/sapcon/SID/* on the connector's VM.
+1. Inside the configuration file, locate the relevant log and do one of the following:
+ - To enable the log, change the value to `True`.
+ - To disable the log, change the value to `False`.
-For example, to stop the `ABAPJobLog`, change its value to `False`:
+For example, to stop ingestion for the `ABAPJobLog`, change its value to `False`:
``` ABAPJobLog = False ```
+Review the list of available logs in the [Systemconfig.ini file reference](reference-systemconfig.md#logs-activation-status-section).
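For reference, a trimmed-down sketch of what that section of *systemconfig.ini* might look like is shown below; the keys shown are examples, and the full list is in the reference linked above:

```
[Logs Activation Status]
ABAPAuditLog = True
ABAPJobLog = False
ABAPSpoolLog = True
```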
-You can also [stop the user master data tables](sap-solution-deploy-alternate.md#configuring-user-master-data-collection).
+You can also [stop ingesting the user master data tables](sap-solution-deploy-alternate.md#configuring-user-master-data-collection).
> [!NOTE] >
-> Once you stop one of the logs, the workbooks and analytics queries that use that log may not work.
+> Once you stop one of the logs or tables, the workbooks and analytics queries that use that log may not work.
> [Understand which log each workbook uses](sap-solution-security-content.md#built-in-workbooks) and [understand which log each analytic rule uses](sap-solution-security-content.md#built-in-analytics-rules). ## Stop log ingestion and disable the connector
sentinel Sap Deploy Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-deploy-troubleshoot.md
docker logs -f sapcon-[SID]
**Enable debug mode printing**:
-1. On your VM, edit the **sapcon/[SID]/systemconfig.ini** file.
+1. On your VM, edit the **/opt/sapcon/[SID]/systemconfig.ini** file.
1. Define the **General** section if it wasn't previously defined. In this section, define `logging_debug = True`.
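If the file doesn't already contain a **General** section, the result might look like this minimal sketch (section name as described in the step above; leave any other sections unchanged):

```
[General]
logging_debug = True
```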
The change takes effect two minutes after you save the file. You don't need to r
**Disable debug mode printing**:
-1. On your VM, edit the **sapcon/[SID]/systemconfig.ini** file.
+1. On your VM, edit the **/opt/sapcon/[SID]/systemconfig.ini** file.
1. In the **General** section, define `logging_debug = False`.
The change takes effect two minutes after you save the file. You don't need to r
The change takes effect two minutes after you save the file. You don't need to restart the Docker container.
-## View all Docker execution logs
+## View all container execution logs
-To view all Docker execution logs for your Microsoft Sentinel Solution for SAP data connector deployment, run one of the following commands:
-
-```bash
-docker exec -it sapcon-[SID] bash && cd /sapcon-app/sapcon/logs
-```
-
-or
-
-```bash
-docker exec ΓÇôit sapcon-[SID] cat /sapcon-app/sapcon/logs/[FILE_LOGNAME]
-```
-
-Output similar to the following should be displayed:
-
-```bash
-Logs directory:
-root@644c46cd82a9:/sapcon-app# ls sapcon/logs/ -l
-total 508
--rwxr-xr-x 1 root root 0 Mar 12 09:22 ' __init__.py'--rw-r--r-- 1 root root 282 Mar 12 16:01 ABAPAppLog.log--rw-r--r-- 1 root root 1056 Mar 12 16:01 ABAPAuditLog.log--rw-r--r-- 1 root root 465 Mar 12 16:01 ABAPCRLog.log--rw-r--r-- 1 root root 515 Mar 12 16:01 ABAPChangeDocsLog.log--rw-r--r-- 1 root root 282 Mar 12 16:01 ABAPJobLog.log--rw-r--r-- 1 root root 480 Mar 12 16:01 ABAPSpoolLog.log--rw-r--r-- 1 root root 525 Mar 12 16:01 ABAPSpoolOutputLog.log--rw-r--r-- 1 root root 0 Mar 12 15:51 ABAPTableDataLog.log--rw-r--r-- 1 root root 495 Mar 12 16:01 ABAPWorkflowLog.log--rw-r--r-- 1 root root 465311 Mar 14 06:54 API.log # view this log to see submits of data into Microsoft Sentinel--rw-r--r-- 1 root root 0 Mar 12 15:51 LogsDeltaManager.log--rw-r--r-- 1 root root 0 Mar 12 15:51 PersistenceManager.log--rw-r--r-- 1 root root 4830 Mar 12 16:01 RFC.log--rw-r--r-- 1 root root 5595 Mar 12 16:03 SystemAdmin.log
-```
-
-To copy your logs to the host operating system, run:
-
-```bash
-docker cp sapcon-[SID]:/sapcon-app/sapcon/logs /directory
-```
-
-For example:
-
-```bash
-docker cp sapcon-A4H:/sapcon-app/sapcon/logs /tmp/sapcon-logs-extract
-```
+Connector execution logs for your Microsoft Sentinel Solution for SAP data connector deployment are stored in **/opt/sapcon/[SID]/log**. The log file name is **OmniLog.log**. A history of log files is kept, suffixed with *.<number>*, such as **OmniLog.log.1** and **OmniLog.log.2**.
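For example, to follow the most recent connector log in real time, you might run a command like the following, replacing [SID] with your system ID:

```bash
tail -f /opt/sapcon/[SID]/log/OmniLog.log
```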
## Review and update the Microsoft Sentinel for SAP data connector configuration
The following steps reset the connector and reingest SAP logs from the last 30 m
docker stop sapcon-[SID] ```
-1. Delete the **metadata.db** file from the **sapcon/[SID]** directory. Run:
+1. Delete the **metadata.db** file from the **/opt/sapcon/[SID]** directory. Run:
```bash cd /opt/sapcon/<SID>
Docker cp SDK by running docker cp nwrfc750P_8-70002752.zip /sapcon-app/inst/
If ABAP runtime errors appear on large systems, try setting a smaller chunk size:
-1. Edit the **sapcon/[SID]/systemconfig.ini** file and in the **Connector Configuration** section define `timechunk = 5`.
+1. Edit the **/opt/sapcon/[SID]/systemconfig.ini** file, and in the **Connector Configuration** section, define `timechunk = 5`.
For example:
If you attempt to retrieve an audit log, without the [required change request](p
While your system should automatically switch to compatibility mode if needed, you may need to switch it manually. To switch to compatibility mode manually:
-1. Edit the **sapcon/[SID]/systemconfig.ini** file
+1. Edit the **/opt/sapcon/[SID]/systemconfig.ini** file
1. In the **Connector Configuration** section, define: `auditlogforcexal = True`
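Assuming the standard layout of that file, the resulting entry might look like this sketch (section name as described in the step above):

```
[Connector Configuration]
auditlogforcexal = True
```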
To check for misconfigurations, run the **RSDBTIME** report in transaction **SE3
docker stop sapcon-[SID] ```
-1. Delete the **metadata.db** file from the **sapcon/[SID]** directory. Run:
+1. Delete the **metadata.db** file from the **/opt/sapcon/[SID]** directory. Run:
```bash
- rm ~/sapcon/[SID]/metadata.db
+ rm /opt/sapcon/[SID]/metadata.db
``` 1. Update the SAP system and the SAP host operating system to have matching settings, such as the same time zone. For more information, see the [SAP Community Wiki](https://wiki.scn.sap.com/wiki/display/Basis/Time+zone+settings%2C+SAP+vs.+OS+level).
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
### Account enrichment fields removed from Azure AD Identity Protection connector
-As of **September 30, 2022**, alerts coming from the **Azure Activity Directory Information Protection connector** no longer contain the following fields:
+As of **September 30, 2022**, alerts coming from the **Azure Active Directory Identity Protection connector** no longer contain the following fields:
- CompromisedEntity - ExtendedProperties["User Account"]
service-fabric Service Fabric Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-get-started.md
For latest Runtime and SDK you can download from below:
| Package |Version| | | |
-|[Install Service fabric runtime for Windows](https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabric.9.0.1107.9590.exe) | 9.0.1107 |
-|[Install Service Fabric SDK](https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabricSDK.6.0.1107.msi) | 6.0.1107 |
+|[Install Service fabric runtime for Windows](https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabric.9.1.1390.9590.exe) | 9.1.1390 |
+|[Install Service Fabric SDK](https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabricSDK.6.1.1390.msi) | 6.1.1390 |
You can find direct links to the installers for previous releases on [Service Fabric Releases](https://github.com/microsoft/service-fabric/tree/master/release_notes)
service-fabric Service Fabric Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-versions.md
The tables in this article outline the Service Fabric and platform versions that
### Current versions | Service Fabric runtime |Can upgrade directly from|Can downgrade to*|Compatible SDK or NuGet package version|Supported .NET runtimes** |OS Version |End of support | | | | | | | | |
-| 9.0 CU4<br>9.0.1121.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
-| 9.0 CU3<br>9.0.1107.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
-| 9.0 CU2<br>9.0.1048.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
-| 9.0 CU1<br>9.0.1028.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
-| 9.0 RTO<br>9.0.1017.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
+| 9.1 RTO<br>9.1.1390.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
+| 9.0 CU4<br>9.0.1121.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2023 |
+| 9.0 CU3<br>9.0.1107.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2023 |
+| 9.0 CU2<br>9.0.1048.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2023 |
+| 9.0 CU1<br>9.0.1028.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2023 |
+| 9.0 RTO<br>9.0.1017.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2023 |
| 8.2CU6<br>8.2.1686.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 30, 2022 |
Support for Service Fabric on a specific OS ends when support for the OS version
### Current versions | Service Fabric runtime | Can upgrade directly from |Can downgrade to*|Compatible SDK or NuGet package version | Supported .NET runtimes** | OS version | End of support | | | | | | | | |
-| 9.0 CU4<br>9.0.1114.1 | 8.0 CU3<br>8.0.527.1 | 8.2 CU 5.1<br>8.2.1483.1 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |
-| 9.0 CU3<br>9.0.1103.1 | 8.0 CU3<br>8.0.527.1 | 8.2 CU 5.1<br>8.2.1483.1 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |
-| 9.0 CU2.1<br>9.0.1086.1 | 8.0 CU3<br>8.0.527.1 | 8.2 CU 5.1<br>8.2.1483.1 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |
+| 9.1 RTO<br>9.1.1206.1 | 8.2 CU6<br>8.2.1485.1 | 8.2 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |
+| 9.0 CU4<br>9.0.1114.1 | 8.0 CU3<br>8.0.527.1 | 8.2 CU 5.1<br>8.2.1483.1 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2023 |
+| 9.0 CU3<br>9.0.1103.1 | 8.0 CU3<br>8.0.527.1 | 8.2 CU 5.1<br>8.2.1483.1 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2023 |
+| 9.0 CU2.1<br>9.0.1086.1 | 8.0 CU3<br>8.0.527.1 | 8.2 CU 5.1<br>8.2.1483.1 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2023 |
| 8.2 CU6<br>8.2.1485.1 | 8.0 CU3<br>8.0.527.1 | N/A | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 | | 8.2 CU5.1<br>8.2.1483.1 | 8.0 CU3<br>8.0.527.1 | N/A | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 |
The following table lists the version names of Service Fabric and their correspo
| Version name | Windows version number | Linux version number | | | | |
-| 9.0 CU3 | 9.0.1121.9590 | 9.0.1114.1 |
+| 9.1 RTO | 9.1.1390.9590 | 9.1.1206.1 |
+| 9.0 CU4 | 9.0.1121.9590 | 9.0.1114.1 |
| 9.0 CU3 | 9.0.1107.9590 | 9.0.1103.1 | | 9.0 CU2.1 | Not applicable | 9.0.1086.1 | | 8.2 CU6 | 8.2.1686.9590 | 8.2.1485.1 |
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Azure Site Recovery allows you to perform global disaster recovery. You can repl
-- | -- America | Canada East, Canada Central, South Central US, West Central US, East US, East US 2, West US, West US 2, West US 3, Central US, North Central US Europe | UK West, UK South, North Europe, West Europe, South Africa West, South Africa North, Norway East, France Central, Switzerland North, Germany West Central, UAE North (UAE is treated as part of the Europe geo cluster)
-Asia | South India, Central India, West India, Southeast Asia, East Asia, Japan East, Japan West, Korea Central, Korea South
+Asia | South India, Central India, West India, Southeast Asia, East Asia, Japan East, Japan West, Korea Central, Korea South, Qatar Central
JIO | JIO India West<br/><br/>Replication cannot be done between JIO and non-JIO regions for Virtual Machines present in JIO subscriptions. This is because JIO subscriptions can have resources only in JIO regions. Australia | Australia East, Australia Southeast, Australia Central, Australia Central 2 Azure Government | US GOV Virginia, US GOV Iowa, US GOV Arizona, US GOV Texas, US DOD East, US DOD Central
site-recovery Deploy Vmware Azure Replication Appliance Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/deploy-vmware-azure-replication-appliance-modernized.md
FIPS (Federal Information Processing Standards) | Do not enable FIPS mode|
|Fully qualified domain name (FQDN) | Static| |Ports | 443 (Control channel orchestration)<br>9443 (Data transport)| |NIC type | VMXNET3 (if the appliance is a VMware VM)|
+|NAT | Supported |
#### Allow URLs
spring-apps How To Bind Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-mysql.md
With Azure Spring Apps, you can bind select Azure services to your applications
## Prerequisites * An application deployed to Azure Spring Apps. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md).
-* An Azure Database for PostgreSQL Flexible Server instance.
+* An Azure Database for MySQL Flexible Server instance.
* [Azure CLI](/cli/azure/install-azure-cli) version 2.41.0 or higher. ## Prepare your Java project
spring-apps How To Bind Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-postgres.md
az spring connection create postgres \
## Next steps
-In this article, you learned how to bind an application in Azure Spring Apps to an Azure Database for MySQL instance. To learn more about binding services to an application, see [Bind an Azure Cosmos DB database to an application in Azure Spring Apps](./how-to-bind-cosmos.md).
+In this article, you learned how to bind an application in Azure Spring Apps to an Azure Database for PostgreSQL instance. To learn more about binding services to an application, see [Bind an Azure Cosmos DB database to an application in Azure Spring Apps](./how-to-bind-cosmos.md).
spring-apps How To Create User Defined Route Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-create-user-defined-route-instance.md
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article describes how to secure outbound traffic from your applications hosted in Azure Spring Apps. The article provides an example of a user-defined route (UDR) instance. UDR is an advanced feature that lets you fully control egress traffic. You can use UDR in scenarios such as disallowing an Azure Spring Apps auto-generated public IP.
+This article describes how to secure outbound traffic from your applications hosted in Azure Spring Apps. The article provides an example of a user-defined route. A user-defined route is an advanced feature that lets you fully control egress traffic. You can use a user-defined route in scenarios such as disallowing an Azure Spring Apps autogenerated public IP address.
## Prerequisites -- All prerequisites for deploying Azure Spring Apps in a virtual network. For more information, see [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md).-- API version of `2022-09-01 preview` or greater-- [Azure CLI version 1.1.7 or later](/cli/azure/install-azure-cli).-- You should be familiar with information in the following articles:
+- All prerequisites for [deploying Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md)
+- An API version of `2022-09-01 preview` or later
+- [Azure CLI version 1.1.7 or later](/cli/azure/install-azure-cli)
+- Familiarity with information in the following articles:
- [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md)
- - [Customer responsibilities for running Azure Spring Apps in VNET](vnet-customer-responsibilities.md)
- - [Customize Azure Spring Cloud egress with a User-Defined Route](concept-outbound-type.md)
+ - [Customer responsibilities for running Azure Spring Apps in a virtual network](vnet-customer-responsibilities.md)
+ - [Customize Azure Spring Apps egress with a user-defined route](concept-outbound-type.md)
-## Create a VNet instance using a user-defined route
+## Create a virtual network by using a user-defined route
-The following illustration shows an example of an Azure Spring Apps VNet instance using a user-defined route.
+The following illustration shows an example of an Azure Spring Apps virtual network that uses a user-defined route (UDR).
-### Set configuration using environment variables
+### Define environment variables
-The following example shows how to define a set of environment variables to be used in resource creation.
+The following example shows how to define a set of environment variables to be used in resource creation:
```bash PREFIX="asa-egress"
ASANAME="${PREFIX}"
VNET_NAME="${PREFIX}-vnet" ASA_APP_SUBNET_NAME="asa-app-subnet" ASA_SERVICE_RUNTIME_SUBNET_NAME="asa-service-runtime-subnet"
-# DO NOT CHANGE FWSUBNET_NAME - This is currently a requirement for Azure Firewall.
+# Do not change FWSUBNET_NAME. This is currently a requirement for Azure Firewall.
FWSUBNET_NAME="AzureFirewallSubnet" FWNAME="${PREFIX}-fw" FWPUBLICIP_NAME="${PREFIX}-fwpublicip"
ASA_NAME="${PREFIX}-instance"
### Create a virtual network with multiple subnets
-This section shows you how to provision a virtual network with three separate subnets: one for the user apps, one for service runtime, and one for the firewall.
+This section shows you how to provision a virtual network with three separate subnets: one for the user apps, one for the service runtime, and one for the firewall.
-First create a resource group, as shown in the following example.
+First create a resource group, as shown in the following example:
```azurecli
-# Create resource group.
+# Create a resource group.
az group create --name $RG --location $LOC ```
-Then create a virtual network with three subnets to host the ASA instance and the Azure Firewall, as shown in the following example.
+Then create a virtual network with three subnets to host the Azure Spring Apps and Azure Firewall instances, as shown in the following example:
```azurecli
-# Dedicated virtual network with ASA app subnet.
+# Dedicated virtual network with an Azure Spring Apps app subnet.
az network vnet create \ --resource-group $RG \
az network vnet create \
--subnet-name $ASA_APP_SUBNET_NAME \ --subnet-prefix 10.42.1.0/24
-# Dedicated subnet for ASA service runtime subnet.
+# Dedicated subnet for the Azure Spring Apps service runtime subnet.
az network vnet subnet create \ --resource-group $RG \
az network vnet subnet create \
--name $ASA_SERVICE_RUNTIME_SUBNET_NAME\ --address-prefix 10.42.2.0/24
-# Dedicated subnet for Azure Firewall. (Firewall name cannot be changed.)
+# Dedicated subnet for Azure Firewall. (Firewall name can't be changed.)
az network vnet subnet create \ --resource-group $RG \
az network vnet subnet create \
--address-prefix 10.42.3.0/24 ```
-### Create and set up an Azure Firewall with a user-defined route
+### Set up an Azure Firewall instance with a user-defined route
-Use the following command to create and set up an Azure Firewall with a user-defined route and configure Azure Firewall outbound rules. The firewall lets you configure granular egress traffic rules from an Azure Spring Apps instance.
+Use the following command to create and set up an Azure Firewall instance with a user-defined route, and to configure Azure Firewall outbound rules. The firewall lets you configure granular egress traffic rules from Azure Spring Apps.
> [!IMPORTANT]
-> If your cluster or application creates a large number of outbound connections directed to the same or small subset of destinations, you might require more firewall frontend IPs to avoid reaching the maximum ports per front-end IP. For more information on how to create an Azure firewall with multiple IPs, see [Quickstart: Create an Azure Firewall with multiple public IP addresses - ARM template](../firewall/quick-create-multiple-ip-template.md). Create a standard SKU public IP resource that will be used as the Azure Firewall front-end address.
+> If your cluster or application creates a large number of outbound connections directed to the same destination or to a small subset of destinations, you might require more firewall front-end IP addresses to avoid reaching the maximum ports per front-end IP address. For more information on how to create an Azure Firewall instance with multiple IP addresses, see [Quickstart: Create an Azure Firewall instance with multiple public IP addresses - ARM template](../firewall/quick-create-multiple-ip-template.md). Create a Standard SKU public IP resource that will be used as the Azure Firewall front-end address.
```azurecli az network public-ip create \
az network public-ip create \
--sku "Standard" ```
-The following example shows how to install the Azure Firewall preview CLI extension and deploy Azure Firewall.
+The following example shows how to install the Azure Firewall preview CLI extension and deploy Azure Firewall:
```azurecli
-# Install Azure Firewall preview CLI extension.
+# Install the Azure Firewall preview CLI extension.
az extension add --name azure-firewall
az network firewall create \
--enable-dns-proxy true ```
-The following example shows how to assign the IP address you created to the firewall front end.
+The following example shows how to assign the IP address that you created to the firewall front end.
> [!NOTE]
-> Setting up the public IP address to the Azure Firewall may take a few minutes. To leverage FQDN on network rules, enable DNS proxy. When enabled, the firewall will listen on port 53 and forward DNS requests to the specified DNS server. The firewall can then translate the FQDN automatically.
+> Setting up the public IP address to the Azure Firewall instance might take a few minutes. To use a fully qualified domain name (FQDN) on network rules, enable a DNS proxy. After you enable the proxy, the firewall will listen on port 53 and forward DNS requests to the specified DNS server. The firewall can then translate the FQDN automatically.
```azurecli
-# Configure firewall IP config.
+# Configure the firewall IP address.
az network firewall ip-config create \ --resource-group $RG \
az network firewall ip-config create \
--vnet-name $VNET_NAME ```
-When the operation has completed, save the firewall front-end IP address for configuration later, as shown in the following example.
+When the operation is finished, save the firewall's front-end IP address for configuration later, as shown in the following example:
```azurecli
-# Capture firewall IP address for later use.
+# Capture the firewall IP address for later use.
FWPUBLIC_IP=$(az network public-ip show \ --resource-group $RG \
FWPRIVATE_IP=$(az network firewall show \
### Create a user-defined route with a hop to Azure Firewall
-Azure automatically routes traffic between Azure subnets, virtual networks, and on-premises networks. If you want to change Azure's default routing, create a route table.
+Azure automatically routes traffic between Azure subnets, virtual networks, and on-premises networks. If you want to change the default routing in Azure, create a route table.
-The following example shows how to create a route table to be associated with a specified subnet. The route table defines the next hop, as in the Azure Firewall you created. Each subnet can have one route table associated with it, or could have no associated route table.
+The following example shows how to create a route table to be associated with a specified subnet. The route table defines the next hop as the Azure Firewall instance that you created. Each subnet can have one route table associated with it, or it might have no associated route table.
```azurecli
-# Create UDR and add a route for Azure Firewall.
+# Create a user-defined route and add a route for Azure Firewall.
az network route-table create \ --resource-group $RG -l $LOC \
az network route-table route create \
--next-hop-ip-address $FWPRIVATE_IP ```
-### Adding firewall rules
+### Add firewall rules
-The following example shows hot to add rules to your firewall. For more information, see [Customer responsibilities for running Azure Spring Apps in VNET](vnet-customer-responsibilities.md).
+The following example shows how to add rules to your firewall. For more information, see [Customer responsibilities for running Azure Spring Apps in a virtual network](vnet-customer-responsibilities.md).
```azurecli # Add firewall network rules.
az network firewall application-rule create \
### Associate route tables with subnets
-To associate the cluster with the firewall, the dedicated subnet for the cluster's subnet must reference the route table you created. App and service runtime subnets must be associated with corresponding route tables. The following example shows how to associate a route table with a subnet.
+To associate the cluster with the firewall, make sure that the dedicated subnet for the cluster references the route table that you created. App and service runtime subnets must be associated with corresponding route tables. The following example shows how to associate a route table with a subnet:
```azurecli
-# Associate route table with next hop to Firewall to the Azure Spring Apps subnet.
+# Associate the route table with a next hop to the firewall for the Azure Spring Apps subnet.
az network vnet subnet update \ --resource-group $RG \
az network vnet subnet update
--route-table $SERVICE_RUNTIME_ROUTE_TABLE_NAME ```
-### Add a role for an Azure Spring Apps RP
+### Add a role for the Azure Spring Apps resource provider
-The following example shows how to add a role for an Azure Spring Apps RP.
+The following example shows how to add a role for the Azure Spring Apps resource provider:
```azurecli VIRTUAL_NETWORK_RESOURCE_ID=$(az network vnet show \
az role assignment create \
--assignee e8de9221-a19c-4c81-b814-fd37c6caf9d2 ```
-### Create a UDR Azure Spring Apps instance
+### Create an Azure Spring Apps instance with user-defined routing
-The following example shows how to create a UDR Azure Spring Apps instance.
+The following example shows how to create an Azure Spring Apps instance with user-defined routing:
```azurecli az spring create \
az spring create \
--outbound-type userDefinedRouting ```
-You can now access the public IP of the firewall from the internet. The firewall will route traffic into Azure Spring Apps subnets according to your routing rules.
+You can now access the public IP address of the firewall from the internet. The firewall will route traffic into Azure Spring Apps subnets according to your routing rules.
## Next steps - [Troubleshooting Azure Spring Apps in virtual networks](troubleshooting-vnet.md)-- [Customer responsibilities for running Azure Spring Apps in VNET](vnet-customer-responsibilities.md)
+- [Customer responsibilities for running Azure Spring Apps in a virtual network](vnet-customer-responsibilities.md)
spring-apps How To Move Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-move-across-regions.md
Title: How to move an Azure Spring Apps service instance to another region
-description: Describes how to move an Azure Spring Apps service instance to another region
+ Title: Move an Azure Spring Apps service instance to another region
+description: Learn how to move an Azure Spring Apps service instance to another region.
Last updated 01/27/2022-+ # Move an Azure Spring Apps service instance to another region
This article shows you how to move your Azure Spring Apps service instance to another region. Moving your instance is useful, for example, as part of a disaster recovery plan or to create a duplicate testing environment.
-You can't move an Azure Spring Apps instance from one region to another directly, but you can use an Azure Resource Manager template (ARM template) to deploy to a new region. For more information about using Azure Resource Manager and templates, see [Quickstart: Create and deploy Azure Resource Manager templates by using the Azure portal](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).
+You can't move an Azure Spring Apps instance from one region to another directly, but you can use an Azure Resource Manager template (ARM template) to deploy your instance to a new region. For more information about using Azure Resource Manager and templates, see [Quickstart: Create and deploy ARM templates by using the Azure portal](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).
-Before you move your service instance, you should be aware of the following limitations:
+Before you move your service instance, consider the following limitations:
- Different feature sets are supported by different pricing tiers (SKUs). If you change the SKU, you may need to change the template to include only features supported by the target SKU.-- You might not be able to move all sub-resources in Azure Spring Apps using the template. Your move may require extra setup after the template is deployed. For more information, see the [Configure the new Azure Spring Apps service instance](#configure-the-new-azure-spring-apps-service-instance) section.-- When you move a virtual network (VNet) instance (see [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md)), you'll need to create new network resources.
+- You might not be able to move all subresources in Azure Spring Apps using the template. Your move may require extra setup after the template is deployed. For more information, see the [Configure the new Azure Spring Apps service instance](#configure-the-new-azure-spring-apps-service-instance) section of this article.
+- When you move a virtual network (VNet) instance, you must create new network resources. For more information, see [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md).
## Prerequisites -- A running Azure Spring Apps instance.-- A target region that supports Azure Spring Apps and its related features.-- [Azure CLI](/cli/azure/install-azure-cli) if you aren't using the Azure portal.
+- An existing Azure Spring Apps service instance. To create a new service instance, see [Quickstart: Deploy your first application in Azure Spring Apps](./quickstart.md).
+- (Optional) [Azure CLI](/cli/azure/install-azure-cli) version 2.11.2 or later.
## Export the template ### [Portal](#tab/azure-portal)
-First, use the following steps to export the template:
+Use the following steps to export the template:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **All resources** in the left menu, then select your Azure Spring Apps instance.
+1. Select **All resources** in the left menu, and then select your Azure Spring Apps instance.
1. Under **Automation**, select **Export template**. 1. Select **Download** on the **Export template** pane. 1. Locate the *.zip* file, unzip it, and get the *template.json* file. This file contains the resource template. ### [Azure CLI](#tab/azure-cli)
-First, use the following command to export the template:
+Use the following command to export the template:
```azurecli az login az account set --subscription <resource-subscription-id>
-az group export --resource-group <resource-group> --resource-ids <resource-id>
+az group export \
+ --resource-group <resource-group> \
+ --resource-ids <resource-id>
``` ## Modify the template
-Next, use the following steps to modify the *template.json* file. In the examples shown here, the new Azure Spring Apps instance name is *new-service-name*, and the previous instance name is *old-service-name*.
+Use the following steps to modify the *template.json* file. In the following examples, the new Azure Spring Apps instance name is *new-service-name*. The previous instance name is *old-service-name*.
-1. Change all `name` instances in the template from *old-service-name* to *new-service-name*, as shown in the following example:
+1. The following example shows how to change all `name` instances in the template from *old-service-name* to *new-service-name*:
```json {
Next, use the following steps to modify the *template.json* file. In the example
} ```
-1. Change the `location` instances in the template to the new target location, as shown in the following example:
+1. The following example shows how to change the `location` instances in the template to the new target location:
```json {
Next, use the following steps to modify the *template.json* file. In the example
} ```
-1. If the instance you're moving is a VNet instance, you'll need to update the target VNet resource `parameters` instances in the template, as shown in the following example:
+1. If the instance you're moving is a virtual network instance, the following example shows how to update the target virtual network resource `parameters` instances in the template:
```json "parameters": {
Next, use the following steps to modify the *template.json* file. In the example
}, ```
- Be sure the subnets `serviceRuntimeSubnetId` and `appSubnetId` (defined in the service `networkProfile`) exist.
+ The following example shows how to make sure the `serviceRuntimeSubnetId` and `appSubnetId` subnets exist. The subnets are defined in the service's `networkProfile` section:
```json {
Next, use the following steps to modify the *template.json* file. In the example
} ```
-1. If any custom domain resources are configured, you need to create the CNAME records as described in [Tutorial: Map an existing custom domain to Azure Spring Apps](tutorial-custom-domain.md). Be sure the record name is expected for the new service name.
+1. If any custom domain resources are configured, create the CNAME records as described in [Tutorial: Map an existing custom domain to Azure Spring Apps](tutorial-custom-domain.md). Make sure the record name is expected for the new service name.
-1. Change all `relativePath` instances in the template `properties` for all app resources to `<default>`, as shown in the following example:
+1. The following example shows how to change all `relativePath` instances in the template `properties` for all app resources to `<default>`:
```json {
Next, use the following steps to modify the *template.json* file. In the example
} ```
- After the app is created, it uses a default banner application. You'LL need to deploy the JAR files again using the Azure CLI. For more information, see the [Configure the new Azure Spring Apps service instance](#configure-the-new-azure-spring-apps-service-instance) section below.
+ After the app is created, it uses a default banner application. Deploy the JAR files again using the Azure CLI. For more information, see the [Configure the new Azure Spring Apps service instance](#configure-the-new-azure-spring-apps-service-instance) section of this article.
-1. If service binding was used and you want to import it to the new service instance, add the `key` property for the target bound resource. In the following example, a bound MySQL database would be included:
+1. If service binding was used and you want to import it to the new service instance, add the `key` property for the target bound resource. In the following example, a bound MySQL database is included:
```json {
After you modify the template, use the following steps to deploy the template an
:::image type="content" source="media/how-to-move-across-regions/search-deploy-template.png" alt-text="Screenshot of Azure portal showing search results." lightbox="media/how-to-move-across-regions/search-deploy-template.png" border="true"::: 1. Under **Services**, select **Deploy a custom template**.
-1. Go to the **Select a template** tab, then select **Build your own template in the editor**.
-1. In the template editor, paste in the *template.json* file you modified earlier, then select **Save**.
+1. Go to the **Select a template** tab, and then select **Build your own template in the editor**.
+1. In the template editor, paste in the *template.json* file you modified earlier, and then select **Save**.
1. In the **Basics** tab, fill in the following information: - The target subscription.
After you modify the template, use the following steps to deploy the template an
- The target region. - Any other parameters required for the template.
- :::image type="content" source="media/how-to-move-across-regions/deploy-template.png" alt-text="Screenshot of Azure portal showing 'Custom deployment' pane.":::
+ :::image type="content" source="media/how-to-move-across-regions/deploy-template.png" alt-text="Screenshot of Azure portal showing the Custom deployment pane." lightbox="media/how-to-move-across-regions/deploy-template.png" :::
1. Select **Review + create** to create the target service instance.
-1. Wait until the template has deployed successfully. If the deployment fails, select **Deployment details** to view the failure reason, then update the template or configurations accordingly.
+1. Wait until the template has deployed successfully. If the deployment fails, select **Deployment details** to view the reason it failed, and then update the template or configurations accordingly.
### [Azure CLI](#tab/azure-cli)
-After you modify the template, use the following command to deploy the custom template and create the new resource.
+After you modify the template, use the following command to deploy the custom template and create the new resource:
```azurecli az login
az deployment group create \
--parameters <param-name-1>=<param-value-1> ```
-Wait until the template has deployed successfully. If the deployment fails, view the deployment details with the command `az deployment group list`, then update the template or configurations accordingly.
+Wait until the template has deployed successfully. If the deployment fails, view the deployment details with the command `az deployment group list`, and then update the template or configurations accordingly.
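The deployment-details commands mentioned above aren't shown in this excerpt. A minimal sketch, with placeholder resource group and deployment names:

```azurecli
# List recent deployments in the target resource group and check their provisioning states.
az deployment group list \
    --resource-group <resource-group> \
    --query "[].{name:name, state:properties.provisioningState}" \
    --output table

# Show the error details for a specific failed deployment (the deployment name is a placeholder).
az deployment group show \
    --resource-group <resource-group> \
    --name <deployment-name> \
    --query properties.error
```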
Wait until the template has deployed successfully. If the deployment fails, view
Some features aren't exported to the template, or can't be imported with a template. You must manually set up some Azure Spring Apps items on the new instance after the template deployment completes successfully. The following guidelines describe these requirements: - The JAR files for the previous service aren't deployed directly to the new service instance. To deploy all apps, follow the instructions in [Quickstart: Build and deploy apps to Azure Spring Apps](quickstart-deploy-apps.md). If there's no active deployment configured automatically, you must configure a production deployment. For more information, see [Set up a staging environment in Azure Spring Apps](how-to-staging-environment.md).-- Config Server won't be imported automatically. To set up Config Server on your new instance, see [Set up a Spring Cloud Config Server instance for your service](how-to-config-server.md).-- Managed identity will be created automatically for the new service instance, but the object ID will be different from the previous instance. For managed identity to work in the new service instance, follow the instructions in [How to enable system-assigned managed identity for applications in Azure Spring Apps](how-to-enable-system-assigned-managed-identity.md).-- For Monitoring -> Metrics, see [Metrics for Azure Spring Apps](concept-metrics.md). To avoid mixing the data, we recommend that you create a new Log Analytics instance to collect the new data. You should also create a new instance for other monitoring configurations.
+- Config Server won't be imported automatically. To set up Config Server on your new instance, see [Configure a managed Spring Cloud Config Server in Azure Spring Apps](how-to-config-server.md).
+- Managed identity is created automatically for the new service instance, but the object ID will be different from the previous instance. For managed identity to work in the new service instance, follow the instructions in [Enable system-assigned managed identity for an application in Azure Spring Apps](how-to-enable-system-assigned-managed-identity.md).
+- For Monitoring -> Metrics, see [Metrics for Azure Spring Apps](concept-metrics.md). To avoid mixing the data, create a new Log Analytics instance to collect the new data. You should also create a new instance for other monitoring configurations.
- For Monitoring -> Diagnostic settings and logs, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md).-- For Monitoring -> Application Insights, see [Application Insights Java In-Process Agent in Azure Spring Apps](how-to-application-insights.md).
+- For Monitoring -> Application Insights, see [Use Application Insights Java In-Process Agent in Azure Spring Apps](how-to-application-insights.md).
## Next steps
spring-apps Vnet Customer Responsibilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/vnet-customer-responsibilities.md
The following list shows the resource requirements for Azure Spring Apps service
| Destination Endpoint | Port | Use | Note | |-||-|--|
-| \*:1194 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:1194 | UDP:1194 | Underlying Kubernetes Cluster management. | |
| \*:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:443 | TCP:443 | Azure Spring Apps Service Management. | The "requiredTraffics" information for a service instance is available in the resource payload, under the "networkProfile" section. |
| \*:123 *or* ntp.ubuntu.com:123 | UDP:123 | NTP time synchronization on Linux nodes. | |
| \*.azurecr.io:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureContainerRegistry:443 | TCP:443 | Azure Container Registry. | Can be replaced by enabling *Azure Container Registry* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
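The "requiredTraffics" details called out in the table can be read from the service instance's resource payload. A minimal sketch using the Azure CLI, assuming the `az spring` extension is installed and using placeholder names:

```azurecli
# Inspect the required outbound traffic for an Azure Spring Apps instance deployed in a virtual network.
az spring show \
    --name <service-instance-name> \
    --resource-group <resource-group> \
    --query "properties.networkProfile.requiredTraffics" \
    --output json
```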
storage Storage Blob Download Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-javascript.md
You can download a blob by using any of the following methods: - Blob.[download](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-download)-- Blob.[downloadToBuffer](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-downloadtobuffer-1)-- Blob.[downloadToFile](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-downloadtofile)
+- Blob.[downloadToBuffer](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-downloadtobuffer-1) (only available in Node.js runtime)
+- Blob.[downloadToFile](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-downloadtofile) (only available in Node.js runtime)
The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide) are available in GitHub as runnable Node.js files.
The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets
## Download to a file path
-The following example downloads a blob by using a file path with the [BlobClient.downloadToFile](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-downloadtofile) method:
+The following example downloads a blob by using a file path with the [BlobClient.downloadToFile](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-downloadtofile) method. This method is only available in the Node.js runtime:
```javascript async function downloadBlobToFile(containerClient, blobName, fileNameWithPath) {
storage Customer Managed Keys Configure Cross Tenant Existing Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-existing-account.md
Title: Configure cross-tenant customer-managed keys for an existing storage account (preview)
+ Title: Configure cross-tenant customer-managed keys for an existing storage account
-description: Learn how to configure Azure Storage encryption with customer-managed keys in an Azure key vault that resides in a different tenant than the tenant where the storage account resides (preview). Customer-managed keys allow a service provider to encrypt the customer's data using an encryption key that is managed by the service provider's customer and that isn't accessible to the service provider.
+description: Learn how to configure Azure Storage encryption with customer-managed keys in an Azure key vault that resides in a different tenant than the tenant where the storage account resides. Customer-managed keys allow a service provider to encrypt the customer's data using an encryption key that is managed by the service provider's customer and that isn't accessible to the service provider.
Previously updated : 10/28/2022 Last updated : 10/31/2022
-# Configure cross-tenant customer-managed keys for an existing storage account (preview)
+# Configure cross-tenant customer-managed keys for an existing storage account
Azure Storage encrypts all data in a storage account at rest. By default, data is encrypted with Microsoft-managed keys. For additional control over encryption keys, you can manage your own keys. Customer-managed keys must be stored in an Azure Key Vault or in an Azure Key Vault Managed Hardware Security Module (HSM). This article shows how to configure encryption with customer-managed keys for an existing storage account. In the cross-tenant scenario, the storage account resides in a tenant managed by an ISV, while the key used for encryption of that storage account resides in a key vault in a tenant that is managed by the customer.
-To learn how to configure customer-managed keys for a new storage account, see [Configure cross-tenant customer-managed keys for a new storage account (preview)](customer-managed-keys-configure-cross-tenant-new-account.md).
-
-## About the preview
-
-To use the preview, you must register for the Azure Active Directory federated client identity feature in the ISV's tenant. Follow these instructions to register with PowerShell or Azure CLI:
-
-### [PowerShell](#tab/powershell-preview)
-
-To register with PowerShell, call the **Register-AzProviderFeature** command.
-
-```azurepowershell
-Register-AzProviderFeature -ProviderNamespace Microsoft.Storage `
- -FeatureName FederatedClientIdentity
-```
-
-To check the status of your registration with PowerShell, call the **Get-AzProviderFeature** command.
-
-```azurepowershell
-Get-AzProviderFeature -ProviderNamespace Microsoft.Storage `
- -FeatureName FederatedClientIdentity
-```
-
-After your registration is approved, you must re-register the Azure Storage resource provider. To re-register the resource provider with PowerShell, call the **Register-AzResourceProvider** command.
-
-```azurepowershell
-Register-AzResourceProvider -ProviderNamespace 'Microsoft.Storage'
-```
-
-### [Azure CLI](#tab/azure-cli-preview)
-
-To register with Azure CLI, call the **az feature register** command.
-
-```azurecli
-az feature register --namespace Microsoft.Storage \
- --name FederatedClientIdentity
-```
-
-To check the status of your registration with Azure CLI, call the **az feature show** command.
-
-```azurecli
-az feature show --namespace Microsoft.Storage \
- --name FederatedClientIdentity
-```
-
-After your registration is approved, you must re-register the Azure Storage resource provider. To re-register the resource provider with Azure CLI, call the **az provider register command**.
-
-```azurecli
-az provider register --namespace 'Microsoft.Storage'
-```
---
-> [!IMPORTANT]
-> Using cross-tenant customer-managed keys with Azure Storage encryption is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+To learn how to configure customer-managed keys for a new storage account, see [Configure cross-tenant customer-managed keys for a new storage account](customer-managed-keys-configure-cross-tenant-new-account.md).
[!INCLUDE [active-directory-msi-cross-tenant-cmk-overview](../../../includes/active-directory-msi-cross-tenant-cmk-overview.md)]
After you've specified the key from the key vault in the customer's tenant, the
### [PowerShell](#tab/azure-powershell)
-To configure cross-tenant customer-managed keys for a new storage account with PowerShell, first install the [Az.Storage PowerShell module](https://www.powershellgallery.com/packages/Az.Storage/4.4.2-preview), version 4.4.2-preview.
+To configure cross-tenant customer-managed keys for a new storage account with PowerShell, first install the [Az.Storage PowerShell module](https://www.powershellgallery.com/packages/Az.Storage), version 5.1.0 or later. This module is installed with the [Az PowerShell module](https://www.powershellgallery.com/packages/Az), version 9.1.0 or later.
Next, call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount), providing the resource ID for the user-assigned managed identity that you configured previously in the ISV's subscription, and the application (client) ID for the multi-tenant application that you configured previously in the ISV's subscription. Provide the key vault URI and key name from the customer's key vault.
Set-AzStorageAccount -ResourceGroupName $isvRgName `
### [Azure CLI](#tab/azure-cli)
-N/A
+To configure cross-tenant customer-managed keys for an existing storage account with Azure CLI, first install the Azure CLI, version 2.42.0 or later. For more information about installing Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
+
+Next, call [az storage account update](/cli/azure/storage/account#az-storage-account-update), providing the resource ID for the user-assigned managed identity that you configured previously in the ISV's subscription, and the application (client) ID for the multi-tenant application that you configured previously in the ISV's subscription. Provide the key vault URI and key name from the customer's key vault.
+
+Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples.
+
+```azurecli
+accountName="<storage-account>"
+kvUri="<key-vault-uri>"
+keyName="<key-name>"
+multiTenantAppId="<multi-tenant-app-id>" # appId value from multi-tenant app
+
+# Get the resource ID for the user-assigned managed identity.
+identityResourceId=$(az identity show --name $userIdentityName \
+ --resource-group $isvRgName \
+ --query id \
+ --output tsv)
+
+az storage account update --name $accountName \
+ --resource-group $isvRgName \
+ --identity-type SystemAssigned,UserAssigned \
+ --user-identity-id $identityResourceId \
+ --encryption-key-vault $kvUri \
+ --encryption-key-name $keyName \
+ --encryption-key-source Microsoft.Keyvault \
+ --key-vault-user-identity-id $identityResourceId \
+ --key-vault-federated-client-id $multiTenantAppId
+```
storage Customer Managed Keys Configure Cross Tenant New Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-new-account.md
Title: Configure cross-tenant customer-managed keys for a new storage account (preview)
+ Title: Configure cross-tenant customer-managed keys for a new storage account
-description: Learn how to configure Azure Storage encryption with customer-managed keys in an Azure key vault that resides in a different tenant than the tenant where the storage account will be created (preview). Customer-managed keys allow a service provider to encrypt the customer's data using an encryption key that is managed by the service provider's customer and that isn't accessible to the service provider.
+description: Learn how to configure Azure Storage encryption with customer-managed keys in an Azure key vault that resides in a different tenant than the tenant where the storage account will be created. Customer-managed keys allow a service provider to encrypt the customer's data using an encryption key that is managed by the service provider's customer and that isn't accessible to the service provider.
Previously updated : 10/28/2022 Last updated : 10/31/2022
-# Configure cross-tenant customer-managed keys for a new storage account (preview)
+# Configure cross-tenant customer-managed keys for a new storage account
Azure Storage encrypts all data in a storage account at rest. By default, data is encrypted with Microsoft-managed keys. For additional control over encryption keys, you can manage your own keys. Customer-managed keys must be stored in an Azure Key Vault or in an Azure Key Vault Managed Hardware Security Module (HSM). This article shows how to configure encryption with customer-managed keys at the time that you create a new storage account. In the cross-tenant scenario, the storage account resides in a tenant managed by an ISV, while the key used for encryption of that storage account resides in a key vault in a tenant that is managed by the customer.
-To learn how to configure customer-managed keys for an existing storage account, see [Configure cross-tenant customer-managed keys for an existing storage account (preview)](customer-managed-keys-configure-cross-tenant-existing-account.md).
-
-## About the preview
-
-To use the preview, you must register for the Azure Active Directory federated client identity feature in the ISV's tenant. Follow these instructions to register with PowerShell or Azure CLI:
-
-### [PowerShell](#tab/powershell-preview)
-
-To register with PowerShell, call the **Register-AzProviderFeature** command.
-
-```azurepowershell
-Register-AzProviderFeature -ProviderNamespace Microsoft.Storage `
- -FeatureName FederatedClientIdentity
-```
-
-To check the status of your registration with PowerShell, call the **Get-AzProviderFeature** command.
-
-```azurepowershell
-Get-AzProviderFeature -ProviderNamespace Microsoft.Storage `
- -FeatureName FederatedClientIdentity
-```
-
-After your registration is approved, you must re-register the Azure Storage resource provider. To re-register the resource provider with PowerShell, call the **Register-AzResourceProvider** command.
-
-```azurepowershell
-Register-AzResourceProvider -ProviderNamespace 'Microsoft.Storage'
-```
-
-### [Azure CLI](#tab/azure-cli-preview)
-
-To register with Azure CLI, call the **az feature register** command.
-
-```azurecli
-az feature register --namespace Microsoft.Storage \
- --name FederatedClientIdentity
-```
-
-To check the status of your registration with Azure CLI, call the **az feature show** command.
-
-```azurecli
-az feature show --namespace Microsoft.Storage \
- --name FederatedClientIdentity
-```
-
-After your registration is approved, you must re-register the Azure Storage resource provider. To re-register the resource provider with Azure CLI, call the **az provider register command**.
-
-```azurecli
-az provider register --namespace 'Microsoft.Storage'
-```
---
-> [!IMPORTANT]
-> Using cross-tenant customer-managed keys with Azure Storage encryption is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+To learn how to configure customer-managed keys for an existing storage account, see [Configure cross-tenant customer-managed keys for an existing storage account](customer-managed-keys-configure-cross-tenant-existing-account.md).
[!INCLUDE [active-directory-msi-cross-tenant-cmk-overview](../../../includes/active-directory-msi-cross-tenant-cmk-overview.md)]
To configure cross-tenant customer-managed keys for a new storage account in the
### [PowerShell](#tab/azure-powershell)
-To configure cross-tenant customer-managed keys for a new storage account in PowerShell, first install the [Az.Storage PowerShell module](https://www.powershellgallery.com/packages/Az.Storage/4.4.2-preview), version 4.4.2-preview.
+To configure cross-tenant customer-managed keys for a new storage account in PowerShell, first install the [Az.Storage PowerShell module](https://www.powershellgallery.com/packages/Az.Storage), version 5.1.0 or later. This module is installed with the [Az PowerShell module](https://www.powershellgallery.com/packages/Az), version 9.1.0 or later.
Next, call [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount), providing the resource ID for the user-assigned managed identity that you configured previously in the ISV's subscription, and the application (client) ID for the multi-tenant application that you configured previously in the ISV's subscription. Provide the key vault URI and key name from the customer's key vault.
New-AzStorageAccount -ResourceGroupName $rgName `
### [Azure CLI](#tab/azure-cli)
-To configure cross-tenant customer-managed keys for a new storage account with Azure CLI, first install the [storage-preview](https://github.com/Azure/azure-cli-extensions/tree/main/src/storage-preview) extension. For more information about installing Azure CLI extensions, see [How to install and manage Azure CLI extensions](/cli/azure/azure-cli-extensions-overview).
+To configure cross-tenant customer-managed keys for a new storage account with Azure CLI, first install the Azure CLI, version 2.42.0 or later. For more information about installing Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
Next, call [az storage account create](/cli/azure/storage/account#az-storage-account-create), providing the resource ID for the user-assigned managed identity that you configured previously in the ISV's subscription, and the application (client) ID for the multi-tenant application that you configured previously in the ISV's subscription. Provide the key vault URI and key name from the customer's key vault.
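The full `az storage account create` command isn't included in this excerpt. The following is a minimal sketch that mirrors the flag names used in the `az storage account update` example for an existing account; the placeholder values, and ancillary options such as `--kind` and `--sku`, are illustrative assumptions:

```azurecli
# Sketch: create a storage account encrypted with a customer-managed key that resides in the customer's tenant.
az storage account create \
    --name <storage-account> \
    --resource-group <resource-group> \
    --location <location> \
    --kind StorageV2 \
    --sku Standard_LRS \
    --identity-type SystemAssigned,UserAssigned \
    --user-identity-id <user-assigned-identity-resource-id> \
    --encryption-key-vault <key-vault-uri> \
    --encryption-key-name <key-name> \
    --encryption-key-source Microsoft.Keyvault \
    --key-vault-user-identity-id <user-assigned-identity-resource-id> \
    --key-vault-federated-client-id <multi-tenant-app-id>
```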
storage Customer Managed Keys Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-overview.md
Previously updated : 09/30/2022 Last updated : 10/31/2022
You can configure customer-managed keys with the key vault and storage account i
To learn how to configure Azure Storage encryption with customer-managed keys when the key vault and storage account are in different Azure AD tenants, see one of the following articles: -- [Configure cross-tenant customer-managed keys for a new storage account (preview)](customer-managed-keys-configure-cross-tenant-new-account.md)-- [Configure cross-tenant customer-managed keys for an existing storage account (preview)](customer-managed-keys-configure-cross-tenant-existing-account.md)
+- [Configure cross-tenant customer-managed keys for a new storage account](customer-managed-keys-configure-cross-tenant-new-account.md)
+- [Configure cross-tenant customer-managed keys for an existing storage account](customer-managed-keys-configure-cross-tenant-existing-account.md)
When you enable or disable customer-managed keys, or when you modify the key or the key version, the protection of the root encryption key changes, but the data in your Azure Storage account doesn't need to be re-encrypted.
storage Storage Service Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-service-encryption.md
Previously updated : 07/12/2022 Last updated : 10/31/2022
For information about encryption and key management for Azure managed disks, see
Data in a new storage account is encrypted with Microsoft-managed keys by default. You can continue to rely on Microsoft-managed keys for the encryption of your data, or you can manage encryption with your own keys. If you choose to manage encryption with your own keys, you have two options. You can use either type of key management, or both: -- You can specify a *customer-managed key* to use for encrypting and decrypting data in Blob Storage and in Azure Files.<sup>1,2</sup> Customer-managed keys must be stored in Azure Key Vault or Azure Key Vault Managed Hardware Security Model (HSM) (preview). For more information about customer-managed keys, see [Use customer-managed keys for Azure Storage encryption](./customer-managed-keys-overview.md).
+- You can specify a *customer-managed key* to use for encrypting and decrypting data in Blob Storage and in Azure Files.<sup>1,2</sup> Customer-managed keys must be stored in Azure Key Vault or Azure Key Vault Managed Hardware Security Module (HSM). For more information about customer-managed keys, see [Use customer-managed keys for Azure Storage encryption](./customer-managed-keys-overview.md).
- You can specify a *customer-provided key* on Blob Storage operations. A client making a read or write request against Blob Storage can include an encryption key on the request for granular control over how blob data is encrypted and decrypted. For more information about customer-provided keys, see [Provide an encryption key on a request to Blob Storage](../blobs/encryption-customer-provided-keys.md). By default, a storage account is encrypted with a key that is scoped to the entire storage account. Encryption scopes enable you to manage encryption with a key that is scoped to a container or an individual blob. You can use encryption scopes to create secure boundaries between data that resides in the same storage account but belongs to different customers. Encryption scopes can use either Microsoft-managed keys or customer-managed keys. For more information about encryption scopes, see [Encryption scopes for Blob storage](../blobs/encryption-scope-overview.md).
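As a rough illustration of the customer-provided key mechanism, the following sketch uploads a block blob with `curl` while passing the encryption key on the request headers. The SAS token, key values, file name, and API version are placeholders, not values from this article:

```bash
# Sketch: upload a block blob encrypted with a customer-provided (AES-256) key.
# BASE64_KEY is the base64-encoded 256-bit key; BASE64_KEY_SHA256 is the base64-encoded SHA-256 hash of that key.
curl -X PUT "https://<storage-account>.blob.core.windows.net/<container>/<blob>?<sas-token>" \
    -H "x-ms-version: 2021-08-06" \
    -H "x-ms-blob-type: BlockBlob" \
    -H "x-ms-encryption-key: $BASE64_KEY" \
    -H "x-ms-encryption-key-sha256: $BASE64_KEY_SHA256" \
    -H "x-ms-encryption-algorithm: AES256" \
    --data-binary @localfile.txt
```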
storage Storage Use Azcopy Blobs Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-download.md
You can download specific blobs by using complete file names, partial names with
#### Specify multiple complete blob names
-Use the [azcopy copy](storage-ref-azcopy-copy.md) command with the `--include-path` option. Separate individual blob names by using a semicolin (`;`).
+Use the [azcopy copy](storage-ref-azcopy-copy.md) command with the `--include-path` option. Separate individual blob names by using a semicolon (`;`).
**Syntax**
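A minimal sketch of the syntax, with placeholder account, container, blob, and directory names:

```bash
azcopy copy 'https://<storage-account>.blob.core.windows.net/<container>?<SAS-token>' '<local-directory-path>' --include-path '<blob-path-1>;<blob-path-2>'
```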
storage File Sync Networking Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-networking-endpoints.md
Title: Configuring Azure File Sync network endpoints | Microsoft Docs
+ Title: Configuring Azure File Sync network endpoints
description: Learn how to configure Azure File Sync network endpoints. Previously updated : 05/24/2021 Last updated : 11/01/2022
Azure Files and Azure File Sync provide two main types of endpoints for accessin
- Public endpoints, which have a public IP address and can be accessed from anywhere in the world. - Private endpoints, which exist within a virtual network and have a private IP address from within the address space of that virtual network.
-For both Azure Files and Azure File Sync, the Azure management objects, the storage account and the Storage Sync Service respectively, control both the public and private endpoints. The storage account is a management construct that represents a shared pool of storage in which you can deploy multiple file shares, as well as other storage resources, such as blob containers or queues. The Storage Sync Service is a management construct that represents registered servers, which are Windows file servers with an established trust relationship with Azure File Sync, and sync groups, which define the topology of the sync relationship.
+For both Azure Files and Azure File Sync, the Azure management objects, the storage account and the Storage Sync Service respectively, control both the public and private endpoints. The storage account is a management construct that represents a shared pool of storage in which you can deploy multiple file shares, as well as other storage resources, such as blob containers or queues. The Storage Sync Service is a management construct that represents registered servers, which are Windows file servers with an established trust relationship with Azure File Sync, and sync groups, which define the topology of the sync relationship.
This article focuses on how to configure the networking endpoints for both Azure Files and Azure File Sync. To learn more about how to configure networking endpoints for accessing Azure file shares directly, rather than caching on-premises with Azure File Sync, see [Configuring Azure Files network endpoints](../files/storage-files-networking-endpoints.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
Additionally:
- If you intend to use the Azure CLI, [install the latest version](/cli/azure/install-azure-cli). ## Create the private endpoints
-When you creating a private endpoint for an Azure resource, the following resources are deployed:
+When you are creating a private endpoint for an Azure resource, the following resources are deployed:
- **A private endpoint**: An Azure resource representing either the private endpoint for the storage account or the Storage Sync Service. You can think of this as a resource that connects your Azure resource and a network interface. - **A network interface (NIC)**: The network interface that maintains a private IP address within the specified virtual network/subnet. This is the exact same resource that gets deployed when you deploy a virtual machine, however instead of being assigned to a VM, it's owned by the private endpoint.
The **Configuration** blade allows you to select the specific virtual network an
Click **Review + create** to create the private endpoint.
-You can test that your private endpoint has been setup correctly by running the following commands from PowerShell.
+You can test that your private endpoint has been set up correctly by running the following commands from PowerShell.
```powershell $privateEndpointResourceGroupName = "<your-private-endpoint-resource-group>"
if ($null -eq $dnsZone) {
-ErrorAction Stop } ```
-Now that you have a reference to the private DNS zone, you must create an A records for your Storage Sync Service.
+Now that you have a reference to the private DNS zone, you must create an A record for your Storage Sync Service.
```powershell $privateEndpointIpFqdnMappings = $privateEndpoint | `
then
fi ```
-Now that you have a reference to the private DNS zone, you must create an A records for your Storage Sync Service.
+Now that you have a reference to the private DNS zone, you must create an A record for your Storage Sync Service.
```bash privateEndpointNIC=$(az network private-endpoint show \
storage Storage Files Identity Ad Ds Configure Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-configure-permissions.md
Previously updated : 10/20/2022 Last updated : 11/01/2022
The following table contains the Azure RBAC permissions related to this configur
| | Read | Read | | | Write | Write |
-## Supported permissions
+## Supported Windows ACLs
-Azure Files supports the full set of basic and advanced Windows ACLs. You can view and configure Windows ACLs on directories and files in an Azure file share by connecting to the share and then using Windows File Explorer, running the Windows [icacls](/windows-server/administration/windows-commands/icacls) command, or the [Set-ACL](/powershell/module/microsoft.powershell.security/set-acl) command.
+Azure Files supports the full set of basic and advanced Windows ACLs.
-To configure ACLs with superuser permissions, you must mount the share by using your storage account key from your domain-joined VM. Follow the instructions in the next section to mount an Azure file share from the command prompt and to configure Windows ACLs.
+|Users|Definition|
+|||
+|`BUILTIN\Administrators`|Built-in security group representing administrators of the file server. This group is empty, and no one can be added to it.|
+|`BUILTIN\Users`|Built-in security group representing users of the file server. It includes `NT AUTHORITY\Authenticated Users` by default. For a traditional file server, you can configure the membership definition per server. For Azure Files, there isn't a hosting server, so `BUILTIN\Users` includes the same set of users as `NT AUTHORITY\Authenticated Users`.|
+|`NT AUTHORITY\SYSTEM`|The service account of the operating system of the file server. This service account doesn't apply in the Azure Files context. It's included in the root directory to be consistent with the Windows file server experience for hybrid scenarios.|
+|`NT AUTHORITY\Authenticated Users`|All users in AD that can get a valid Kerberos token.|
+|`CREATOR OWNER`|Each object, whether a directory or a file, has an owner. If ACLs are assigned to `CREATOR OWNER` on an object, the user who owns that object has the permissions defined by those ACLs.|
The following permissions are included on the root directory of a file share:
The following permissions are included on the root directory of a file share:
- `NT AUTHORITY\SYSTEM:(F)` - `CREATOR OWNER:(OI)(CI)(IO)(F)`
-|Users|Definition|
-|||
-|`BUILTIN\Administrators`|Built-in security group representing administrators of the file server. This group is empty, and no one can be added to it.
-|`BUILTIN\Users`|Built-in security group representing users of the file server. It includes `NT AUTHORITY\Authenticated Users` by default. For a traditional file server, you can configure the membership definition per server. For Azure Files, there isn't a hosting server, hence `BUILTIN\Users` includes the same set of users as `NT AUTHORITY\Authenticated Users`.|
-|`NT AUTHORITY\SYSTEM`|The service account of the operating system of the file server. Such service account doesn't apply in Azure Files context. It is included in the root directory to be consistent with Windows Files Server experience for hybrid scenarios.|
-|`NT AUTHORITY\Authenticated Users`|All users in AD that can get a valid Kerberos token.|
-|`CREATOR OWNER`|Each object either directory or file has an owner for that object. If there are ACLs assigned to `CREATOR OWNER` on that object, then the user that is the owner of this object has the permissions to the object defined by the ACL.|
+## Mount the file share using your storage account key
-## Connect to the Azure file share
+Before you configure Windows ACLs, you must first mount the file share by using your storage account key. To do this, log into a domain-joined device, open a Windows command prompt, and run the following command. Remember to replace `<YourStorageAccountName>`, `<FileShareName>`, and `<YourStorageAccountKey>` with your own values. If Z: is already in use, replace it with an available drive letter. You can find your storage account key in the Azure portal by navigating to the storage account and selecting **Security + networking** > **Access keys**, or you can use the `Get-AzStorageAccountKey` PowerShell cmdlet.
-Run the script below from a normal (not elevated) PowerShell terminal to connect to the Azure file share using the storage account key and map the share to drive Z: on Windows. If Z: is already in use, replace it with an available drive letter. The script will check to see if this storage account is accessible via TCP port 445, which is the port SMB uses. Remember to replace the placeholder values with your own values. For more information, see [Use an Azure file share with Windows](storage-how-to-use-files-windows.md).
+It's important that you use the `net use` Windows command to mount the share at this stage and not PowerShell. If you use PowerShell to mount the share, then the share won't be visible to Windows File Explorer or cmd.exe, and you'll have difficulty configuring Windows ACLs.
> [!NOTE]
> You might see the **Full Control** ACL applied to a role already. This typically already offers the ability to assign permissions. However, because there are access checks at two levels (the share level and the file/directory level), this is restricted. Only users who have the **SMB Elevated Contributor** role and create a new file or directory can assign permissions on those new files or directories without using the storage account key. All other file/directory permission assignment requires connecting to the share using the storage account key first.
-```powershell
-$connectTestResult = Test-NetConnection -ComputerName <storage-account-name>.file.core.windows.net -Port 445
-if ($connectTestResult.TcpTestSucceeded) {
- cmd.exe /C "cmdkey /add:`"<storage-account-name>.file.core.windows.net`" /user:`"localhost\<storage-account-name>`" /pass:`"<storage-account-key>`""
- New-PSDrive -Name Z -PSProvider FileSystem -Root "\\<storage-account-name>.file.core.windows.net\<file-share-name>"
-} else {
- Write-Error -Message "Unable to reach the Azure storage account via port 445. Check to make sure your organization or ISP is not blocking port 445, or use Azure P2S VPN, Azure S2S VPN, or Express Route to tunnel SMB traffic over a different port."
-}
```-
-If you experience issues connecting to Azure Files on Windows, refer to [this troubleshooting tool](https://azure.microsoft.com/blog/new-troubleshooting-diagnostics-for-azure-files-mounting-errors-on-windows/).
+net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName> /user:localhost\<YourStorageAccountName> <YourStorageAccountKey>
+```
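If you'd rather script the key lookup than copy it from the Azure portal, the following is a minimal Az PowerShell sketch. The resource group and storage account names are placeholders, and it assumes the Az.Storage module and a session already signed in with `Connect-AzAccount`.

```powershell
# Minimal sketch: list the storage account keys and take the first one
# <YourResourceGroup> and <YourStorageAccountName> are placeholders
$keys = Get-AzStorageAccountKey -ResourceGroupName "<YourResourceGroup>" -Name "<YourStorageAccountName>"

# Use this value for <YourStorageAccountKey> in the net use command shown above
$keys[0].Value
```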
## Configure Windows ACLs
-After you've connected to your Azure file share, you must configure the Windows ACLs. You can do this using either Windows File Explorer or [icacls](/windows-server/administration/windows-commands/icacls).
+After you've connected to your Azure file share using the storage account key, you must configure the Windows ACLs. You can do this using either [icacls](#configure-windows-acls-with-icacls) or [Windows File Explorer](#configure-windows-acls-with-windows-file-explorer). You can also use the [Set-ACL](/powershell/module/microsoft.powershell.security/set-acl) PowerShell command.
-If you have directories or files in on-premises file servers with Windows DACLs configured against the AD DS identities, you can copy it over to Azure Files persisting the ACLs with traditional file copy tools like Robocopy or [Azure AzCopy v 10.4+](https://github.com/Azure/azure-storage-azcopy/releases). If your directories and files are tiered to Azure Files through Azure File Sync, your ACLs are carried over and persisted in their native format.
+If you have directories or files in on-premises file servers with Windows ACLs configured against the AD DS identities, you can copy them over to Azure Files persisting the ACLs with traditional file copy tools like Robocopy or [Azure AzCopy v 10.4+](https://github.com/Azure/azure-storage-azcopy/releases). If your directories and files are tiered to Azure Files through Azure File Sync, your ACLs are carried over and persisted in their native format.
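As a rough illustration of what the icacls and PowerShell approaches look like once the share is mounted as Z:, here's a minimal sketch. The directory names and the `contoso\user1` identity are hypothetical; substitute your own AD DS identities and paths.

```powershell
# Minimal sketch (hypothetical paths and identity): grant Modify on a directory and its children with icacls
icacls "Z:\Finance" /grant "contoso\user1:(OI)(CI)M"

# Copy the ACL of one directory to another with Get-Acl / Set-Acl
$acl = Get-Acl -Path "Z:\Finance"
Set-Acl -Path "Z:\Finance-Archive" -AclObject $acl
```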
### Configure Windows ACLs with icacls
storage Storage Files Identity Auth Active Directory Domain Service Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-domain-service-enable.md
description: Learn how to enable identity-based authentication over Server Messa
Previously updated : 10/13/2022 Last updated : 10/31/2022
az storage account update -n <storage-account-name> -g <resource-group-name> --e
By default, Azure AD DS authentication uses Kerberos RC4 encryption. We recommend configuring it to use Kerberos AES-256 encryption instead by following these instructions.
-The action requires running an operation on the Active Directory domain that's managed by Azure AD DS to reach a domain controller to request a property change to the domain object. The cmdlets below are Windows Server Active Directory PowerShell cmdlets, not Azure PowerShell cmdlets. Because of this, these PowerShell commands must be run from a machine that's domain-joined to the Azure AD DS domain.
+The action requires running an operation on the Active Directory domain that's managed by Azure AD DS to reach a domain controller to request a property change to the domain object. The cmdlets below are Windows Server Active Directory PowerShell cmdlets, not Azure PowerShell cmdlets. Because of this, these PowerShell commands must be run from a client machine that's domain-joined to the Azure AD DS domain.
> [!IMPORTANT]
-> The Windows Server Active Directory PowerShell cmdlets in this section must be run in Windows PowerShell 5.1. PowerShell 7.x and Azure Cloud Shell won't work in this scenario.
+> The Windows Server Active Directory PowerShell cmdlets in this section must be run in Windows PowerShell 5.1 from a client machine that's domain-joined to the Azure AD DS domain. PowerShell 7.x and Azure Cloud Shell won't work in this scenario.
-As an Azure AD DS user with the required permissions (typically, members of the **AAD DC Administrators** group will have the necessary permissions), execute the following PowerShell commands.
+Log into the domain-joined client machine as an Azure AD DS user with the required permissions (typically, members of the **AAD DC Administrators** group will have the necessary permissions). Open a normal (non-elevated) PowerShell session and execute the following commands.
```powershell # 1. Find the service account in your managed domain that represents the storage account.
Get-ADUser $userObject -properties KerberosEncryptionType
[!INCLUDE [storage-files-aad-permissions-and-mounting](../../../includes/storage-files-aad-permissions-and-mounting.md)]
-You've now successfully enabled Azure AD DS authentication over SMB and assigned a custom role that provides access to an Azure file share with an Azure AD identity. To grant additional users access to your file share, follow the instructions in [Assign share-level permissions to an identity](#assign-share-level-permissions-to-an-identity) and [Configure Windows ACLs](#configure-windows-acls).
+You've now successfully enabled Azure AD DS authentication over SMB and assigned a custom role that provides access to an Azure file share with an Azure AD identity. To grant additional users access to your file share, follow the instructions in [Assign share-level permissions to an Azure AD identity](#assign-share-level-permissions-to-an-azure-ad-identity) and [Configure Windows ACLs](#configure-windows-acls).
## Next steps
storage Storage Troubleshoot Windows File Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-windows-file-connection-problems.md
Validate that permissions are configured correctly:
- **Active Directory Domain Services (AD DS)** see [Assign share-level permissions to an identity](./storage-files-identity-ad-ds-assign-permissions.md). Share-level permission assignments are supported for groups and users that have been synced from AD DS to Azure Active Directory (Azure AD) using Azure AD Connect sync or Azure AD Connect cloud sync. Confirm that groups and users being assigned share-level permissions are not unsupported "cloud-only" groups.-- **Azure Active Directory Domain Services (Azure AD DS)** see [Assign share-level permissions to an identity](./storage-files-identity-auth-active-directory-domain-service-enable.md?tabs=azure-portal#assign-share-level-permissions-to-an-identity).
+- **Azure Active Directory Domain Services (Azure AD DS)** see [Assign share-level permissions to an Azure AD identity](./storage-files-identity-auth-active-directory-domain-service-enable.md?tabs=azure-portal#assign-share-level-permissions-to-an-azure-ad-identity).
<a id="error53-67-87"></a> ## Error 53, Error 67, or Error 87 when you mount or unmount an Azure file share
synapse-analytics Overview Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/overview-architecture.md
Previously updated : 04/15/2020 Last updated : 11/01/2022 +
-# Azure Synapse SQL architecture
+# Azure Synapse SQL architecture
-This article describes the architecture components of Synapse SQL.
+This article describes the architecture components of Synapse SQL. It also explains how Azure Synapse SQL combines distributed query processing capabilities with Azure Storage to achieve high performance and scalability.
## Synapse SQL architecture components
-Synapse SQL leverages a scale out architecture to distribute computational processing of data across multiple nodes. Compute is separate from storage, which enables you to scale compute independently of the data in your system.
+Synapse SQL uses a scale-out architecture to distribute computational processing of data across multiple nodes. Compute is separate from storage, which enables you to scale compute independently of the data in your system.
-For dedicated SQL pool, the unit of scale is an abstraction of compute power that is known as a [data warehouse unit](resource-consumption-models.md).
+For dedicated SQL pool, the unit of scale is an abstraction of compute power that is known as a [data warehouse unit](resource-consumption-models.md).
-For serverless SQL pool, being serverless, scaling is done automatically to accommodate query resource requirements. As topology changes over time by adding, removing nodes or failovers, it adapts to changes and makes sure your query has enough resources and finishes successfully. For example, the image below shows serverless SQL pool utilizing 4 compute nodes to execute a query.
+For serverless SQL pool, being serverless, scaling is done automatically to accommodate query resource requirements. As topology changes over time by adding, removing nodes or failovers, it adapts to changes and makes sure your query has enough resources and finishes successfully. For example, the following image shows serverless SQL pool using four compute nodes to execute a query.
-![Synapse SQL architecture](./media//overview-architecture/sql-architecture.png)
-Synapse SQL uses a node-based architecture. Applications connect and issue T-SQL commands to a Control node, which is the single point of entry for Synapse SQL.
+Synapse SQL uses a node-based architecture. Applications connect and issue T-SQL commands to a Control node, which is the single point of entry for Synapse SQL.
-The Azure Synapse SQL Control node utilizes a distributed query engine to optimize queries for parallel processing, and then passes operations to Compute nodes to do their work in parallel.
+The Azure Synapse SQL Control node utilizes a distributed query engine to optimize queries for parallel processing, and then passes operations to Compute nodes to do their work in parallel.
The serverless SQL pool Control node uses a Distributed Query Processing (DQP) engine to optimize and orchestrate the distributed execution of a user query by splitting it into smaller queries that are executed on Compute nodes. Each small query is called a task and represents a distributed execution unit. A task reads files from storage, joins results from other tasks, and groups or orders data retrieved from other tasks.
With decoupled storage and compute, when using Synapse SQL one can benefit from
## Azure Storage
-Synapse SQL leverages Azure Storage to keep your user data safe. Since your data is stored and managed by Azure Storage, there is a separate charge for your storage consumption.
+Synapse SQL uses Azure Storage to keep your user data safe. Since your data is stored and managed by Azure Storage, there's a separate charge for your storage consumption.
Serverless SQL pool allows you to query your data lake files, while dedicated SQL pool allows you to query and ingest data from your data lake files. When data is ingested into dedicated SQL pool, the data is sharded into **distributions** to optimize the performance of the system. You can choose which sharding pattern to use to distribute the data when you define the table. These sharding patterns are supported:
Serverless SQL pool allows you to query your data lake files, while dedicated SQ
## Control node
-The Control node is the brain of the architecture. It is the front end that interacts with all applications and connections.
+The Control node is the brain of the architecture. It's the front end that interacts with all applications and connections.
In Synapse SQL, the distributed query engine runs on the Control node to optimize and coordinate parallel queries. When you submit a T-SQL query to dedicated SQL pool, the Control node transforms it into queries that run against each distribution in parallel.
In serverless SQL pool, the DQP engine runs on Control node to optimize and coor
## Compute nodes
-The Compute nodes provide the computational power.
+The Compute nodes provide the computational power.
In dedicated SQL pool, distributions map to Compute nodes for processing. As you pay for more compute resources, pool remaps the distributions to the available Compute nodes. The number of compute nodes ranges from 1 to 60, and is determined by the service level for the dedicated SQL pool. Each Compute node has a node ID that is visible in system views. You can see the Compute node ID by looking for the node_id column in system views whose names begin with sys.pdw_nodes. For a list of these system views, see [Synapse SQL system views](/sql/relational-databases/system-catalog-views/sql-data-warehouse-and-parallel-data-warehouse-catalog-views?view=azure-sqldw-latest&preserve-view=true).
Data Movement Service (DMS) is the data transport technology in dedicated SQL po
## Distributions
-A distribution is the basic unit of storage and processing for parallel queries that run on distributed data in dedicated SQL pool. When dedicated SQL pool runs a query, the work is divided into 60 smaller queries that run in parallel.
+A distribution is the basic unit of storage and processing for parallel queries that run on distributed data in dedicated SQL pool. When dedicated SQL pool runs a query, the work is divided into 60 smaller queries that run in parallel.
-Each of the 60 smaller queries runs on one of the data distributions. Each Compute node manages one or more of the 60 distributions. A dedicated SQL pool with maximum compute resources has one distribution per Compute node. A dedicated SQL pool with minimum compute resources has all the distributions on one compute node.
+Each of the 60 smaller queries runs on one of the data distributions. Each Compute node manages one or more of the 60 distributions. A dedicated SQL pool with maximum compute resources has one distribution per Compute node. A dedicated SQL pool with minimum compute resources has all the distributions on one compute node.
## Hash-distributed tables
-A hash distributed table can deliver the highest query performance for joins and aggregations on large tables.
+
+A hash distributed table can deliver the highest query performance for joins and aggregations on large tables.
To shard data into a hash-distributed table, dedicated SQL pool uses a hash function to deterministically assign each row to one distribution. In the table definition, one of the columns is designated as the distribution column. The hash function uses the values in the distribution column to assign each row to a distribution.
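To make the distribution column concrete, here's a minimal sketch that designates `CustomerId` as the distribution column when creating a table. The table, column, server, and credential names are hypothetical, and it assumes the SqlServer PowerShell module and a SQL admin login for the dedicated SQL pool.

```powershell
# Minimal sketch (hypothetical names): create a hash-distributed table in a dedicated SQL pool
$createTable = @"
CREATE TABLE dbo.FactSales
(
    SaleId     BIGINT NOT NULL,
    CustomerId INT    NOT NULL,
    Amount     DECIMAL(18,2)
)
WITH
(
    DISTRIBUTION = HASH(CustomerId),  -- rows with the same CustomerId land in the same distribution
    CLUSTERED COLUMNSTORE INDEX
);
"@

# Prompt for the SQL admin credential and run the statement against the dedicated SQL pool
$cred = Get-Credential
Invoke-Sqlcmd -ServerInstance "<workspace-name>.sql.azuresynapse.net" -Database "<dedicated-pool-name>" -Credential $cred -Query $createTable
```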
-The following diagram illustrates how a full (non-distributed table) gets stored as a hash-distributed table.
+The following diagram illustrates how a full (non-distributed) table gets stored as a hash-distributed table.
-![Distributed table](media//overview-architecture/hash-distributed-table.png "Distributed table")
-* Each row belongs to one distribution.
-* A deterministic hash algorithm assigns each row to one distribution.
+* Each row belongs to one distribution.
+* A deterministic hash algorithm assigns each row to one distribution.
* The number of table rows per distribution varies as shown by the different sizes of tables. There are performance considerations for the selection of a distribution column, such as distinctness, data skew, and the types of queries that run on the system.
There are performance considerations for the selection of a distribution column,
A round-robin table is the simplest table to create and delivers fast performance when used as a staging table for loads.
-A round-robin distributed table distributes data evenly across the table but without any further optimization. A distribution is first chosen at random and then buffers of rows are assigned to distributions sequentially. It is quick to load data into a round-robin table, but query performance can often be better with hash distributed tables. Joins on round-robin tables require reshuffling data, which takes additional time.
+A round-robin distributed table distributes data evenly across the table but without any further optimization. A distribution is first chosen at random and then buffers of rows are assigned to distributions sequentially. It's quick to load data into a round-robin table, but query performance can often be better with hash distributed tables. Joins on round-robin tables require reshuffling data, which takes extra time.
## Replicated tables+ A replicated table provides the fastest query performance for small tables.
-A table that is replicated caches a full copy of the table on each compute node. So, replicating a table removes the need to transfer data among compute nodes before a join or aggregation. Replicated tables are best utilized with small tables. Extra storage is required and there is additional overhead that is incurred when writing data, which make large tables impractical.
+A table that is replicated caches a full copy of the table on each compute node. So, replicating a table removes the need to transfer data among compute nodes before a join or aggregation. Replicated tables are best utilized with small tables. Extra storage is required, and there's extra overhead incurred when writing data, which makes large tables impractical.
-The diagram below shows a replicated table that is cached on the first distribution on each compute node.
+The diagram below shows a replicated table that is cached on the first distribution on each compute node.
-![Replicated table](media/overview-architecture/replicated-table.png "Replicated table")
## Next steps
-Now that you know a bit about Synapse SQL, learn how to quickly [create a dedicated SQL pool](../quickstart-create-sql-pool-portal.md) and [load sample data](../sql-data-warehouse/sql-data-warehouse-load-from-azure-blob-storage-with-polybase.md). Or start [using serverless SQL pool](../quickstart-sql-on-demand.md). If you are new to Azure, you may find the [Azure glossary](../../azure-glossary-cloud-terminology.md) helpful as you encounter new terminology.
+Now that you know a bit about Synapse SQL, learn how to quickly [create a dedicated SQL pool](../quickstart-create-sql-pool-portal.md) and [load sample data](../sql-data-warehouse/sql-data-warehouse-load-from-azure-blob-storage-with-polybase.md). Or start [using serverless SQL pool](../quickstart-sql-on-demand.md). If you're new to Azure, you may find the [Azure glossary](../../azure-glossary-cloud-terminology.md) helpful as you encounter new terminology.
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
The error "Invalid object name 'table name'" indicates that you're using an obje
- The table has some column types that can't be represented in serverless SQL pool. - The table has a format that isn't supported in serverless SQL pool. Examples are Avro or ORC.
+### String or binary data would be truncated
+
+This error happens if the length of your string or binary column type (for example `VARCHAR`, `VARBINARY`, or `NVARCHAR`) is shorter than the actual size of data that you are reading. You can fix this error by increasing the length of the column type:
+- If your string column is defined as the `VARCHAR(32)` type and the text is 60 characters, use the `VARCHAR(60)` type (or longer) in your column schema.
+- If you are using schema inference (without the `WITH` schema), all string columns are automatically defined as the `VARCHAR(8000)` type. If you are getting this error, explicitly define the schema in a `WITH` clause with a larger column type such as `VARCHAR(MAX)` (see the sketch after this list).
+- If your table is in the Lake database, try to increase the string column size in the Spark pool.
+- Try to `SET ANSI_WARNINGS OFF` to enable serverless SQL pool to automatically truncate the VARCHAR values, if the truncation won't affect your workload.
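To illustrate the `WITH` clause fix from the second bullet, here's a minimal sketch that queries a hypothetical Parquet path through the serverless SQL endpoint and explicitly widens the string column. The workspace name, storage path, and column names are placeholders, and it assumes the Az.Accounts module plus a recent SqlServer module (for the `-AccessToken` parameter).

```powershell
# Minimal sketch (hypothetical names): widen the string column explicitly instead of relying on schema inference
$query = @"
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://<storage-account>.dfs.core.windows.net/<container>/data/*.parquet',
    FORMAT = 'PARQUET'
) WITH (
    product_name VARCHAR(MAX),  -- widened; an inferred or short VARCHAR would be truncated here
    quantity     INT
) AS rows;
"@

# Acquire an Azure AD token and run the query against the serverless SQL endpoint
$token = (Get-AzAccessToken -ResourceUrl "https://database.windows.net").Token
Invoke-Sqlcmd -ServerInstance "<workspace-name>-ondemand.sql.azuresynapse.net" -Database "master" -AccessToken $token -Query $query
```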
+ ### Unclosed quotation mark after the character string In rare cases, where you use the LIKE operator on a string column or some comparison with the string literals, you might get the following error:
There are some limitations and known issues that you might see in Delta Lake sup
- Serverless SQL pools in Synapse Analytics don't support the datasets with the [BLOOM filter](/azure/databricks/optimizations/bloom-filters). The serverless SQL pool ignores the BLOOM filters. - Delta Lake support isn't available in dedicated SQL pools. Make sure that you use serverless SQL pools to query Delta Lake files.
+### Column rename in Delta table is not supported
+
+The serverless SQL pool doesn't support querying Delta Lake tables that have [renamed columns](https://docs.delta.io/latest/delta-batch.html#rename-columns). Serverless SQL pool can't read data from renamed columns.
+ ### JSON text isn't properly formatted This error indicates that serverless SQL pool can't read the Delta Lake transaction log. You'll probably see the following error:
virtual-desktop Teams Supported Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-supported-features.md
Title: Supported features for Microsoft Teams on Azure Virtual Desktop - Azure
description: Supported features for Microsoft Teams on Azure Virtual Desktop. Previously updated : 10/21/2022 Last updated : 11/01/2022
The following table lists whether the Windows Desktop client or macOS client sup
|Communication Access Real-time Translation (CART) transcriptions|Yes|Yes| |Give and take control |Yes|No| |Multiwindow|Yes|Yes|
-|Background blur|Yes|No|
-|Background images|Yes|No|
+|Background blur|Yes|Yes|
+|Background images|Yes|Yes|
|Screen share and video together|Yes|Yes| |Secondary ringer|Yes|No| |Dynamic e911|Yes|Yes|
The following table lists the minimum required versions for each Teams feature.
|CART transcriptions|1.2.2322 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version| |Give and take control |1.2.2924 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version| |Multiwindow|1.2.1755 and later|10.7.7 and later|1.0.2006.11001 and later|1.5.00.11865 and later|
-|Background blur|1.2.3004 and later|Not supported|1.0.2006.11001 and later|1.5.00.11865 and later|
-|Background images|1.2.3004 and later|Not supported|1.0.2006.11001 and later|1.5.00.11865 and later|
+|Background blur|1.2.3004 and later|10.7.10 and later|1.0.2006.11001 and later|1.5.00.11865 and later|
+|Background images|1.2.3004 and later|10.7.10 and later|1.0.2006.11001 and later|1.5.00.11865 and later|
|Screen share and video together|1.2.1755 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version| |Secondary ringer|1.2.3004 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version| |Dynamic e911|1.2.2600 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version|
virtual-machines Scheduled Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/scheduled-events.md
Scheduled Events provides events in the following use cases:
Scheduled events are delivered to: - Standalone Virtual Machines.-- All the VMs in a cloud service.
+- All the VMs in an [Azure cloud service (classic)](../../cloud-services/index.yml).
- All the VMs in an availability set. - All the VMs in an availability zone. - All the VMs in a scale set placement group.
In the case where there are scheduled events, the response contains an array of
| NotBefore| Time after which this event can start. The event is guaranteed to not start before this time. Will be blank if the event has already started <br><br> Example: <br><ul><li> Mon, 19 Sep 2016 18:29:47 GMT | | Description | Description of this event. <br><br> Example: <br><ul><li> Host server is undergoing maintenance. | | EventSource | Initiator of the event. <br><br> Example: <br><ul><li> `Platform`: This event is initiated by platform. <li>`User`: This event is initiated by user. |
-| DurationInSeconds | The expected duration of the interruption caused by the event. <br><br> Example: <br><ul><li> `9`: The interruption caused by the event will last for 9 seconds. <li>`-1`: The default value used if the impact duration is either unknown or not applicable. |
+| DurationInSeconds | The expected duration of the interruption caused by the event. <br><br> Example: <br><ul><li> `9`: The interruption caused by the event will last for 9 seconds. <li> `0`: The event will not interrupt the VM or impact its availability (for example, an update to the network) <li>`-1`: The default value used if the impact duration is either unknown or not applicable. |
### Event Scheduling Each event is scheduled a minimum amount of time in the future based on the event type. This time is reflected in an event's `NotBefore` property.
The `DocumentIncarnation` is changing every time there is new information in `Ev
"NotBefore": "Mon, 11 Apr 2022 22:26:58 GMT", "Description": "Virtual machine is being paused because of a memory-preserving Live Migration operation.", "EventSource": "Platform",
- "DurationInSeconds": -1
+ "DurationInSeconds": 5
} ] }
The `DocumentIncarnation` is changing every time there is new information in `Ev
"NotBefore": "", "Description": "Virtual machine is being paused because of a memory-preserving Live Migration operation.", "EventSource": "Platform",
- "DurationInSeconds": -1
+ "DurationInSeconds": 5
} ] }
virtual-machines Scheduled Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/scheduled-events.md
Scheduled Events provides events in the following use cases:
Scheduled events are delivered to: - Standalone Virtual Machines.-- All the VMs in a cloud service.
+- All the VMs in an [Azure cloud service (classic)](../../cloud-services/index.yml).
- All the VMs in an availability set. - All the VMs in an availability zone. - All the VMs in a scale set placement group.
In the case where there are scheduled events, the response contains an array of
| NotBefore| Time after which this event can start. The event is guaranteed to not start before this time. Will be blank if the event has already started <br><br> Example: <br><ul><li> Mon, 19 Sep 2016 18:29:47 GMT | | Description | Description of this event. <br><br> Example: <br><ul><li> Host server is undergoing maintenance. | | EventSource | Initiator of the event. <br><br> Example: <br><ul><li> `Platform`: This event is initiated by platform. <li>`User`: This event is initiated by user. |
-| DurationInSeconds | The expected duration of the interruption caused by the event. <br><br> Example: <br><ul><li> `9`: The interruption caused by the event will last for 9 seconds. <li>`-1`: The default value used if the impact duration is either unknown or not applicable. |
+| DurationInSeconds | The expected duration of the interruption caused by the event. <br><br> Example: <br><ul><li> `9`: The interruption caused by the event will last for 9 seconds. <li>`0`: The event will not interrupt the VM or impact its availability (for example, an update to the network) <li>`-1`: The default value used if the impact duration is either unknown or not applicable. |
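To see these fields for yourself, the following is a minimal PowerShell sketch that polls the Scheduled Events endpoint from inside a VM. The `api-version` value shown is an assumption; use the version documented for your environment.

```powershell
# Minimal sketch: poll the Scheduled Events endpoint from inside the VM (the api-version value is an assumption)
$uri = "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
$response = Invoke-RestMethod -Headers @{ Metadata = "true" } -Method Get -Uri $uri

# List each pending event with its expected interruption length
$response.Events | Select-Object EventId, EventType, ResourceType, NotBefore, DurationInSeconds
```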
### Event scheduling Each event is scheduled a minimum amount of time in the future based on the event type. This time is reflected in an event's `NotBefore` property.
The `DocumentIncarnation` is changing every time there is new information in `Ev
"NotBefore": "Mon, 11 Apr 2022 22:26:58 GMT", "Description": "Virtual machine is being paused because of a memory-preserving Live Migration operation.", "EventSource": "Platform",
- "DurationInSeconds": -1
+ "DurationInSeconds": 5
} ] }
The `DocumentIncarnation` is changing every time there is new information in `Ev
"NotBefore": "", "Description": "Virtual machine is being paused because of a memory-preserving Live Migration operation.", "EventSource": "Platform",
- "DurationInSeconds": -1
+ "DurationInSeconds": 5
} ] }
virtual-machines Configure Netweaver Azure Monitor Sap Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/configure-netweaver-azure-monitor-sap-solutions.md
In this how-to guide, you'll learn to configure the SAP NetWeaver provider for use with *Azure Monitor for SAP solutions*. You can use SAP NetWeaver with both versions of the service, *Azure Monitor for SAP solutions* and *Azure Monitor for SAP solutions (classic)*.
-The SAP start service provides multiple services, including monitoring the SAP system. Both versions of Azure Monitor for SAP solutions use **SAPControl**, which is a SOAP web service interface that exposes these capabilities. The **SAPControl** interface [differentiates between protected and unprotected web service methods](https://wiki.scn.sap.com/wiki/display/SI/Protected+web+methods+of+sapstartsrv). It's necessary to unprotect some methods to use Azure Monitor for SAP solutions with NetWeaver.
+You can select between two connection types when you configure the SAP NetWeaver provider to collect information from the SAP system. Metrics are collected by using:
+
+- **SAP Control** - The SAP start service provides multiple services, including monitoring the SAP system. Both versions of Azure Monitor for SAP solutions use **SAP Control**, which is a SOAP web service interface that exposes these capabilities. The **SAP Control** interface [differentiates between protected and unprotected web service methods](https://wiki.scn.sap.com/wiki/display/SI/Protected+web+methods+of+sapstartsrv). It's necessary to unprotect some methods to use Azure Monitor for SAP solutions with NetWeaver.
+- **SAP RFC** - Azure Monitor for SAP solutions also provides the ability to collect additional information from the SAP system by using standard SAP RFC. It's available only as part of Azure Monitor for SAP solutions and isn't available in the classic version.
+
+You can collect the following metrics by using the SAP NetWeaver provider:
+
+- SAP system and application server availability (for example, instance process availability of dispatcher, ICM, Gateway, Message Server, Enqueue Server, IGS Watchdog) (SAP Control)
+- Work process usage statistics and trends (SAP Control)
+- Enqueue Lock statistics and trends (SAP Control)
+- Queue usage statistics and trends (SAP Control)
+- SMON Metrics (**transaction code - /SDF/SMON**) (RFC)
+- SWNC Workload, Memory, Transaction, User, RFC Usage (**transaction code - ST03N**) (RFC)
+- Short Dumps (**transaction code - ST22**) (RFC)
+- Object Lock (**transaction code - SM12**) (RFC)
+- Failed Updates (**transaction code - SM13**) (RFC)
+- System Logs Analysis (**transaction code - SM21**) (RFC)
+- Batch Jobs Statistics (**transaction code - SM37**) (RFC)
+- Outbound Queues (**transaction code - SMQ1**) (RFC)
+- Inbound Queues (**transaction code - SMQ2**) (RFC)
+- Transactional RFC (**transaction code - SM59**) (RFC)
+- STMS Change Transport System Metrics (**transaction code - STMS**) (RFC)
+ ## Prerequisites
The SAP start service provides multiple services, including monitoring the SAP s
To configure the NetWeaver provider for the current Azure Monitor for SAP solutions version, you'll need to:
-1. [Unprotect methods for metrics](#unprotect-methods-for-metrics)
-1. [Check that the rules have updated properly](#check-updated-rules)
-1. [Set up RFC metrics](#set-up-rfc-metrics)
-1. [Add the NetWeaver provider](#add-netweaver-provider)
+1. [Prerequisite - Unprotect methods for metrics](#prerequisite-unprotect-methods-for-metrics)
+1. [Prerequisite to enable RFC metrics ](#prerequisite-to-enable-rfc-metrics)
+1. [Add the NetWeaver provider](#adding-netweaver-provider)
-### Unprotect methods for metrics
+Refer to the troubleshooting section to resolve any issues you face while adding the SAP NetWeaver provider.
-To fetch specific metrics, you need to unprotect some methods in each SAP system:
+### Prerequisite unprotect methods for metrics
-1. Open an SAP GUI connection to the SAP server.
+This step is **mandatory** when configuring SAP NetWeaver Provider. To fetch specific metrics, you need to unprotect some methods in each SAP instance:
+1. Open an SAP GUI connection to the SAP server.
1. Sign in with an administrative account.- 1. Execute transaction **RZ10**.-
-1. Select the appropriate profile (*DEFAULT.PFL*).
-
+1. Select the appropriate profile (the instance profile is recommended because it doesn't require a restart; *DEFAULT.PFL* requires a restart of the SAP system).
1. Select **Extended Maintenance** &gt; **Change**.- 1. Select the profile parameter `service/protectedwebmethods`.-
-1. Change the value to:
-
- ```text
- SDEFAULT -GetQueueStatistic -ABAPGetWPTable -EnqGetStatistic -GetProcessList
- ```
-
+1. Change the value to:
+ ```text
+ SDEFAULT -GetQueueStatistic -ABAPGetWPTable -EnqGetStatistic -GetProcessList
+ ```
1. Select **Copy**.- 1. Select **Profile** &gt; **Save** to save the changes.- 1. Restart the **SAPStartSRV** service on each instance in the SAP system. Restarting the services doesn't restart the entire system. This process only restarts **SAPStartSRV** (on Windows) or the daemon process (in Unix or Linux). 1. On Windows systems, use the SAP Microsoft Management Console (MMC) or SAP Management Console (MC) to restart the service. Right-click each instance. Then, choose **All Tasks** &gt; **Restart Service**. 1. On Linux systems, use the following command to restart the service. Replace `<instance number>` with your SAP system's instance number.
- ```bash
- RestartService
- ```
-
- ```bash
+ ```bash
sapcontrol -nr <instance number> -function RestartService
```
-You must restart the **SAPStartSRV** service on each instance of your SAP system to unprotect the **SAPControl** web methods. The read-only SOAP API is required for the NetWeaver provider to fetch metric data from your SAP system. If you don't unprotect these methods, there will be empty or missing visualizations in the NetWeaver metric workbook.
-### Check updated rules
+### Prerequisite to enable RFC metrics
-After you restart the SAP service, check that your updated rules are applied to each instance.
+For AS ABAP applications only, you can set up the NetWeaver RFC metrics. This step is **mandatory** when the selected connection type is **SOAP+RFC**. Perform the following steps as a prerequisite to enable RFC:
-1. Log in to the SAP system as `sidadm`.
+1. **Create or upload a role** in the SAP NW ABAP system. Azure Monitor for SAP solutions requires this role to connect to SAP. The role uses least-privilege access. Download and unzip [Z_AMS_NETWEAVER_MONITORING.zip](https://github.com/Azure/Azure-Monitor-for-SAP-solutions-preview/files/8710130/Z_AMS_NETWEAVER_MONITORING.zip).
+ 1. Sign in to your SAP system.
+ 1. Use the transaction code **PFCG** &gt; select **Role Upload** in the menu.
+ 1. Upload the **Z_AMS_NETWEAVER_MONITORING.SAP** file from the ZIP file.
+ 1. Select **Execute** to generate the role. (ensure the profile is also generated as part of the role upload)
+
+2. **Create and authorize a new RFC user**.
+ 1. Create an RFC user.
+ 1. Assign the role **Z_AMS_NETWEAVER_MONITORING** to the user. It's the role that you uploaded in the previous section.
+
+3. **Enable SICF Services** to access the RFC via the SAP Internet Communication Framework (ICF)
+ 1. Go to transaction code **SICF**.
+ 1. Go to the service path `/default_host/sap/bc/soap/`.
+ 1. Activate the services **wsdl**, **wsdl11** and **RFC**.
+
+It's also recommended to check that you enabled the ICF ports.
-1. Run the following command. Replace `<instance number>` with your system's instance number.
+4. **SMON** - Enable **SMON** to monitor the system performance. Make sure the version of **ST-PI** is **SAPK-74005INSTPI**. You'll see empty visualizations in the workbook when it isn't configured.
- ```bash
- sapcontrol -nr <instance number>; -function ParameterValue service/protectedwebmethods
- ```
+ 1. Enable the **SDF/SMON** snapshot service for your system. Turn on daily monitoring. For instructions, see [SAP Note 2651881](https://userapps.support.sap.com/sap/support/knowledge/en/2651881).
+ 2. Configure **SDF/SMON** metrics to be aggregated every minute.
+ 3. It's recommended to schedule **SDF/SMON** as a background job in your target SAP client each minute.
-1. Log in as another user.
+### Adding NetWeaver provider
-1. Run the following command. Again, replace `<instance number>` with your system's instance number. Also replace `<admin user>` with your administrator username, and `<admin password>` with the password.
+Ensure that all the prerequisites are completed successfully. To add the NetWeaver provider:
- ```bash
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to the Azure Monitor for SAP solutions service page.
+1. Select **Create** to open the resource creation page.
+1. Enter information for the **Basics** tab.
+1. Select the **Providers** tab. Then, select **Add provider**.
+1. Configure the new provider:
+ 1. For **Type**, select **SAP NetWeaver**.
+ 2. For **Name**, provide a unique name for the provider
+ 3. For **System ID (SID)**, enter the three-character SAP system identifier.
+ 4. For **Application Server**, enter the IP address or the fully qualified domain name (FQDN) of the SAP NetWeaver system to monitor. For example, `sapservername.contoso.com` where `sapservername` is the hostname and `contoso.com` is the domain. If you're using a hostname, make sure there's connectivity from the virtual network that you used to create the Azure Monitor for SAP solutions resource.
+ 5. For **Instance number**, specify the instance number of SAP NetWeaver (00-99)
+ 6. For **Connection type** - select either [SOAP](#prerequisite-unprotect-methods-for-metrics) + [RFC](#prerequisite-to-enable-rfc-metrics) or [SOAP](#prerequisite-unprotect-methods-for-metrics) based on the metric collected (refer above section for details)
+ 7. For **SAP client ID**, provide the SAP client identifier.
+ 8. For **SAP ICM HTTP Port**, enter the port that the ICM is using, for example, 80(NN) where (NN) is the instance number.
+ 9. For **SAP username**, enter the name of the user that you created to connect to the SAP system.
+ 10. For **SAP password**, enter the password for the user.
+ 11. For **Host file entries**, provide the DNS mappings for all SAP VMs associated with the SID
+ Enter **all SAP application servers and ASCS** host file entries in **Host file entries**. Enter host file mappings in comma-separated format. The expected format for each entry is IP address, FQDN, hostname. For example: **192.X.X.X sapservername.contoso.com sapservername,192.X.X.X sapservername2.contoso.com sapservername2**. Make sure that host file entries are provided for all hostnames that the [command returns](#determine-all-hostname-associated-with-an-sap-system)
+
+## Troubleshooting for SAP NetWeaver Provider
+
+The following are common commands and troubleshooting solutions for errors.
+
+### Ensuring Internet Communication Framework port is open
+
+1. Sign in to the SAP system
+2. Go to transaction code **SICF**.
+3. Navigate to the service path `/default_host/sap/bc/soap/`.
+4. Right-click the ping service and choose **Test Service**. SAP starts your default browser.
+5. If the port can't be reached, or the test fails, open the port in the SAP VM.
+
+ 1. For Linux, run the following commands. Replace `<your port>` with your configured port.
+
+ ```bash
+ sudo firewall-cmd --permanent --zone=public --add-port=<your port>/TCP
+ ```
+ ```bash
+ sudo firewall-cmd --reload
+ ```
+ 1. For Windows, open Windows Defender Firewall from the Start menu. Select **Advanced settings** in the side menu, then select **Inbound Rules**. To open a port, select **New Rule**. Add your port and set the protocol to TCP.
+
+### Check for unprotected updated rules
+
+After you restart the SAP service, check that your updated rules are applied to each instance.
+
+1. When signed in to the SAP system as `sidadm`, run the following command. Replace `<instance number>` with your system's instance number.
+
+ ```bash
+ sapcontrol -nr <instance number> -function ParameterValue service/protectedwebmethods
+ ```
+
+1. When signed in as a non-SIDADM user, run the following command. Replace `<instance number>` with your system's instance number, `<admin user>` with your administrator username, and `<admin password>` with the password.
+
+ ```bash
sapcontrol -nr <instance number> -function ParameterValue service/protectedwebmethods -user "<admin user>" "<admin password>"
```
-1. Review the output.
+1. Review the output. Ensure that the output lists the methods **GetQueueStatistic**, **ABAPGetWPTable**, **EnqGetStatistic**, and **GetProcessList**.
1. Repeat the previous steps for each instance profile.
To validate the rules, run a test query against the web methods. Replace the `<h
$sapcntrl.$Function($FunctionObject) ``` -
-### Set up RFC metrics
-
-For AS ABAP applications only, you can set up the NetWeaver RFC metrics.
-
-Create or upload the following role in the SAP NW ABAP system. Azure Monitor for SAP solutions requires this role to connect to SAP. The role uses least privilege access.
-
-1. Log in to your SAP system.
-1. Download and unzip [Z_AMS_NETWEAVER_MONITORING.zip](https://github.com/Azure/Azure-Monitor-for-SAP-solutions-preview/files/8710130/Z_AMS_NETWEAVER_MONITORING.zip).
-1. Use the transaction code **PFCG** &gt; **Role Upload**.
-1. Upload the **Z_AMS_NETWEAVER_MONITORING.SAP** file from the ZIP file.
-1. Select **Execute** to generate the role.
-1. Exit the SAP system.
-
-Create and authorize a new RFC user.
-
-1. Log in to the SAP system.
-1. Create an RFC user.
-1. Assign the role **Z_AMS_NETWEAVER_MONITORING** to the user. This is the role that you uploaded in the previous section.
-
-Enable **SMON** to monitor the system performance.
-
-1. Enable the **SDF/SMON** snapshot service for your system.
-1. Configure **SDF/SMON** metrics to be aggregated every minute.
-1. Make sure the version of **ST-PI** is **SAPK-74005INSTPI**.
-1. Turn on daily monitoring. For instructions, see [SAP Note 2651881](https://userapps.support.sap.com/sap/support/knowledge/en/2651881).
-1. It's recommended to schedule **SDF/SMON** as a background job in your target SAP client each minute. Log in to SAP and use **TCODE /SDF/SMON** to configure the setting.
-
-Enable SAP Internet Communication Framework (ICF):
-
-1. Log in to the SAP system.
-1. Go to transaction code **SICF**.
-1. Go to the service path `/default_host/sap/bc/soap/`.
-1. Activate the services **wsdl**, **wsdl11** and **RFC**.
-
-It's also recommended to check that you enabled the ICF ports.
-
-1. Log in to the SAP service.
-1. Right-click the ping service and choose **Test Service**. SAP starts your default browser.
-1. Navigate to the ping service using the configured port.
-1. If the port can't be reached, or the test fails, open the port in the SAP VM.
+### Determine all hostname associated with an SAP system
-1. For Linux, run the following commands. Replace `<your port>` with your configured port.
+To determine all SAP hostnames associated with the SID, sign in to the SAP system as the `sidadm` user. Then, run the following command:
- ```bash
- sudo firewall-cmd --permanent --zone=public --add-port=<your port>/TCP
- ```
+ ```bash
+ /usr/sap/hostctrl/exe/sapcontrol -nr <instancenumber> -function GetSystemInstanceList
+ ```
- ```bash
- sudo firewall-cmd --reload
- ```
-
-1. For Windows, open Windows Defender Firewall from the Start menu. Select **Advanced settings** in the side menu, then select **Inbound Rules**. To open a port, select **New Rule**. Add your port and set the protocol to TCP.
-
-### Add NetWeaver provider
-
-To add the NetWeaver provider:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to the Azure Monitor for SAP solutions service page.
-1. Select **Create** to open the resource creation page.
-1. Enter information for the **Basics** tab.
-1. Select the **Providers** tab. Then, select **Add provider**.
-1. Configure the new provider:
- 1. For **Type**, select **SAP NetWeaver**.
- 1. For **System ID (SID)**, enter the three-character SAP system identifier.
- 1. For **Application Server**, enter the IP address or the fully qualified domain name (FQDN) of the SAP NetWeaver system to monitor. For example, `sapservername.contoso.com` where `sapservername` is the hostname and `contoso.com` is the domain.
-1. Save your changes.
+### Common errors and possible solutions
-If you're using a hostname, make sure there's connectivity from the virtual network that you used to create the Azure Monitor for SAP solutions resource.
--- For **Instance number**, specify the instance number of SAP NetWeaver (00-99)-- For **Host file entries**, provide the DNS mappings for all SAP VMs associated with the SID.-
-Enter all SAP application servers and ASCS host file entries in **Host file entries**.
+#### Methods incorrectly unprotected in RZ10
+The provider settings validation operation has failed with code 'SOAPWebMethodsValidationFailed'.
+
+Possible Causes: The operation failed with error: 'Error occurred while validating SOAP client API calls for SAP system saptstgtmci.redmond.corp.microsoft.com ['ABAPGetWPTable - [["HTTP 401 Unauthorized", ["SAPSYSTEM1_10", "SAPSYSTEM2_10", "SAPSYSTEM3_10"]]]', 'GetQueueStatistic - [["HTTP 401 Unauthorized", ["SAPSYSTEM1_10", "SAPSYSTEM2_10", "SAPSYSTEM3_10"]]]'].'.
+
+Recommended Action: 'Ensure that the SOAP web service methods are unprotected correctly. For more information, see'.
+(Code: ProviderInstanceValidationOperationFailed)
+
+#### Incorrect username and password
+The provider settings validation operation has failed with code 'NetWeaverAuthenticationFailed'.
+
+Possible Causes: The operation failed with error: 'Authentication failed, incorrect SAP NetWeaver username, password or client id.'.
+
+Recommended Action: 'Please check the mandatory parameters username, password or client id are provided correctly.'.
+(Code: ProviderInstanceValidationOperationFailed)
- Enter host file mappings in comma-separated format. The expected format for each entry is IP address, FQDN, hostname.
+#### WSDL11 is inactive in SICF
+The provider settings validation operation has failed with code 'NetWeaverRfcSOAPWSDLInactive'.
+
+Possible Causes: The operation failed with error: 'WSDL11 is inactive in the SAP System: (SID).
+Error occurred while validating RFC SOAP client API calls for SAP system.
+
+Recommended Action: 'Please check the WSDL11 service node is active, refer to SICF Transaction in SAP System to activate the service'.
+(Code: ProviderInstanceValidationOperationFailed)
+
+#### Roles incorrectly uploaded and profile not activated
+
+The provider settings validation operation has failed with code 'NetWeaverRFCAuthorizationFailed'.
+
+Possible Causes: Authentication failed, roles file isn't uploaded in the SAP System.
+
+Recommended Action: Ensure that the roles file is uploaded correctly in SAP System. For more information, see.
+(Code: ProviderInstanceValidationOperationFailed)
- For example: **192.X.X.X sapservername.contoso.com sapservername,192.X.X.X sapservername2.contoso.com sapservername2**
-
- To determine all SAP hostnames associated with the SID, log in to the SAP system using the `sidadm` user. Then, run the following command:
-
- `/usr/sap/hostctrl/exe/sapcontrol -nr <instancenumber> -function GetSystemInstanceList`
-
-Make sure that host file entries are provided for all hostnames that the command returns.
--- For **SAP client ID**, provide the SAP client identifier.-- For **SAP HTTP Port**, enter the port that your ICF is running. For example, 8110.-- For **SAP username**, enter the name of the user that you created to connect to the SAP system.-- For **SAP password**, enter the password for the user.
+#### Incorrect input provided
+The provider settings validation operation has failed with code 'SOAPApiConnectionError'.
+
+Possible Causes: The operation failed with error: 'Unable to reach the hostname: (hostname) with the input provided.
+
+Recommended Action: 'check the input hostname, instance number, and host file entries. '.
+(Code: ProviderInstanceValidationOperationFailed)
+
## Configure NetWeaver for Azure Monitor for SAP solutions (classic)
To fetch specific metrics, you need to unprotect some methods for the current re
After updating the parameter, restart the **SAPStartSRV** service on each of the instances in the SAP system. Restarting the services doesn't restart the SAP system. Only the **SAPStartSRV** service (in Windows) or daemon process (in Unix/Linux) is restarted.
-You must restart **SAPStartSRV** on each instance of the SAP system for the SAPControl web methods to be unprotected. These read-only SOAP API are required for the NetWeaver provider to fetch metric data from the SAP system. Failure to unprotect these methods leads to empty or missing visualizations on the NetWeaver metric workbook.
+You must restart **SAPStartSRV** on each instance of the SAP system for the SAP Control web methods to be unprotected. These read-only SOAP APIs are required for the NetWeaver provider to fetch metric data from the SAP system. Failure to unprotect these methods results in empty or missing visualizations on the NetWeaver metric workbook.
On Windows, open the SAP Microsoft Management Console (MMC) / SAP Management Console (MC). Right-click on each instance and select **All Tasks** &gt; **Restart Service**.
For example, if the hostname of the SAP system has an FQDN of `myhost.mycompany.
- The hostname is `myhost` - The subdomain is `mycompany.contoso.com`
-When the NetWeaver provider invokes the **GetSystemInstanceList** API on the SAP system, SAP returns the hostnames of all instances in the system. The collect VM uses this list to make more API calls to fetch metrics for each instances features. For example, ABAP, J2EE, MESSAGESERVER, ENQUE, ENQREP, and more. If you specify the subdomain, the collect VM uses the subdomain to build the FQDN of each instance in the system.
+When the NetWeaver provider invokes the **GetSystemInstanceList** API on the SAP system, SAP returns the hostnames of all instances in the system. The collect VM uses this list to make more API calls to fetch metrics for each instance's features. For example, ABAP, J2EE, MESSAGESERVER, ENQUE, ENQREP, and more. If you specify the subdomain, the collect VM uses the subdomain to build the FQDN of each instance in the system.
Don't specify an IP address for the hostname if your SAP system is part of network domain.
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- October 31, 2022: Change in [HA for NFS on Azure VMs on SLES](./high-availability-guide-suse-nfs.md) to fix script location for DRBD 9.0
- October 31, 2022: Change in [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md) to update the guideline for sizing `/hana/shared` - October 27, 2022: Adding Ev4 and Ev5 VM families and updated OS releases to table in [SAP ASE Azure Virtual Machines DBMS deployment for SAP workload](./dbms_guide_sapase.md) - October 20, 2022: Change in [HA for NFS on Azure VMs on SLES](./high-availability-guide-suse-nfs.md) and [HA for SAP NW on Azure VMs on SLES for SAP applications](./high-availability-guide-suse.md) to indicate that we are de-emphasizing SAP reference architectures, utilizing NFS clusters
virtual-machines High Availability Guide Suse Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-nfs.md
vm-windows Previously updated : 10/20/2022 Last updated : 10/25/2022
The following items are prefixed with either **[A]** - applicable to all nodes,
} common { handlers {
- fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
- after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
+ fence-peer "/usr/lib/drbd/crm-fence-peer.9.sh";
+ after-resync-target "/usr/lib/drbd/crm-unfence-peer.9.sh";
split-brain "/usr/lib/drbd/notify-split-brain.sh root"; pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f"; }
The following items are prefixed with either **[A]** - applicable to all nodes,
disk { on-io-error detach; }
+ net {
+ fencing resource-and-stonith;
+ }
on <b>prod-nfs-0</b> { address <b>10.0.0.6:7790</b>; device /dev/drbd<b>0</b>;
The following items are prefixed with either **[A]** - applicable to all nodes,
disk { on-io-error detach; }
+ net {
+ fencing resource-and-stonith;
+ }
on <b>prod-nfs-0</b> { address <b>10.0.0.6:7791</b>; device /dev/drbd<b>1</b>;
virtual-network-manager How To Configure Cross Tenant Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-configure-cross-tenant-cli.md
+
+ Title: Configure cross-tenant connection in Azure Virtual Network Manager - CLI
+description: Learn to connect Azure subscriptions in Azure Virtual Network Manager using cross-tenant connections for the management of virtual networks across subscriptions.
++++ Last updated : 11/1/2022+
+#customerintent: As a cloud admin, I need to manage multiple tenants from a single network manager instance. Cross-tenant functionality gives me this ability so I can easily manage all network resources governed by Azure Virtual Network Manager
++
+# Configure cross-tenant connection in Azure Virtual Network Manager
+
+In this article, you'll learn how to create cross-tenant connections in Azure Virtual Network Manager using [Azure CLI](/cli/azure/network/manager/scope-connection). Cross-tenant support allows organizations to use a central network manager instance for managing virtual networks across different tenants and subscriptions. First, you'll create the scope connection on the central network manager. Then you'll create the network manager connection on the connecting tenant and verify the connection. Last, you'll add virtual networks from different tenants and verify them. Once completed, you can centrally manage the resources of other tenants from a central network manager instance.
+
+> [!IMPORTANT]
+> Azure Virtual Network Manager is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- Two Azure tenants with virtual networks that need to be managed by Azure Virtual Network Manager. During the how-to, the tenants will be referred to as follows:
+ - **Central management tenant** - The tenant where an Azure Virtual Network Manager instance is installed, and you'll centrally manage network groups from cross-tenant connections.
+ - **Target managed tenant** - The tenant containing virtual networks to be managed. This tenant will be connected to the central management tenant.
+- Azure Virtual Network Manager deployed in the central management tenant.
+- Required permissions include:
+ - Administrator of central management tenant has guest account in target managed tenant.
+ - Administrator guest account has *Network Contributor* permissions applied at the appropriate scope level (management group, subscription, or virtual network), as sketched below.
+
+Need help with setting up permissions? Check out how to [add guest users in the Azure portal](../active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md), and how to [assign user roles to resources in the Azure portal](../role-based-access-control/role-assignments-portal.md).
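+
+For example, here's a hedged sketch of granting the guest administrator account the *Network Contributor* role at subscription scope. The object ID and subscription ID are placeholders, not values from this article.
+
+```azurecli
+# Assign Network Contributor to the guest administrator at the target subscription scope
+az role assignment create --assignee "<guest-admin-object-id>" --role "Network Contributor" --scope "/subscriptions/<target-subscription-id>"
+```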
+
+## Create scope connection within network manager
+
+Creation of the scope connection begins on the central management tenant, where a network manager is deployed. This is the network manager where you plan to manage all of your resources across tenants. In this task, you'll set up a scope connection to add a subscription from a target tenant. If you wish to use a management group, modify the `--resource-id` argument to look like `/providers/Microsoft.Management/managementGroups/{mgId}` (a sketch of the management group variant follows the command below).
+
+```azurecli
+# Create scope connection in network manager in the central management tenant
+az network manager scope-connection create --resource-group "myRG" --network-manager-name "myAVNM" --name "ToTargetManagedTenant" --description "This is a connection to manage resources in the target managed tenant" --resource-id "/subscriptions/13579864-1234-5678-abcd-0987654321ab" --tenant-id "24680975-1234-abcd-56fg-121314ab5643"
+```
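+
+If your target scope is a management group rather than a subscription, the same command applies with the management group resource ID, as described above. The following is a minimal sketch of that variant; the `{mgId}` placeholder and the connection name are illustrative.
+
+```azurecli
+# Sketch: scope connection that targets a management group in the target managed tenant
+az network manager scope-connection create --resource-group "myRG" --network-manager-name "myAVNM" --name "ToTargetManagedTenantMG" --description "This is a connection to manage a management group in the target managed tenant" --resource-id "/providers/Microsoft.Management/managementGroups/{mgId}" --tenant-id "24680975-1234-abcd-56fg-121314ab5643"
+```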
+
+## Create network manager connection on subscription in other tenant
+Once the scope connection is created, you'll switch to your target tenant for the network manager connection. During this task, you'll connect the target tenant to the scope connection created previously and verify the connection state.
+
+1. Enter the following command to connect to the target managed tenant with your administrative account:
+
+ ```azurecli
+
+ # Login to target managed tenant
+ # Note: Change the --tenant value to the appropriate tenant ID
+ az login --tenant "12345678-12a3-4abc-5cde-678909876543"
+ ```
+ You'll be required to complete authentication with your organization based on your organization's policies.
+
+1. Enter the following command to create the cross-tenant connection to the central management tenant.
+Set the subscription (note it's the same subscription that the connection references in step 1).
+
+ ```azurecli
+ # Set the Azure subscription
+ az account set --subscription 87654321-abcd-1234-1def-0987654321ab
++
+ # Create cross-tenant connection to central management tenant
+ az network manager connection subscription create --connection-name "toCentralManagementTenant" --description "This connection allows management of the tenant by a central management tenant" --network-manager-id "/subscriptions/13579864-1234-5678-abcd-0987654321ab/resourceGroups/myRG/providers/Microsoft.Network/networkManagers/myAVNM"
+ ```
+
+## Verify the connection state
+
+1. Enter the following command to check the connection status (a filtered variant is sketched after these steps):
+
+ ```azurecli
+ # Check connection status
+ az network manager connection subscription show --name "toCentralManagementTenant"
+ ```
+
+1. Switch back to the central management tenant and perform a get on the network manager. The added subscription appears under the cross-tenant scopes property.
+
+ ```azurecli
+ # View subscription added to network manager
+ az network manager show --resource-group myAVNMResourceGroup --name myAVNM
+ ```
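+
+If you only need specific values from these checks, you can filter the output with a JMESPath query. This is a hedged sketch that assumes the connection resource exposes its state in a `connectionState` property.
+
+```azurecli
+# Return only the connection state (for example, Connected or Pending)
+az network manager connection subscription show --name "toCentralManagementTenant" --query connectionState --output tsv
+```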
+
+## Add static members to your network group
+In this task, you'll add a cross-tenant virtual network to your network group with static membership. The virtual network subscription used below is the same as referenced when creating connections above.
+
+```azurecli
+# Create network group with static member from target managed tenant
+az network manager group static-member create --network-group-name "CrossTenantNetworkGroup" --network-manager-name "myAVNM" --resource-group "myAVNMResourceGroup" --static-member-name "targetVnet01" --resource-id "/subscriptions/87654321-abcd-1234-1def-0987654321ab/resourceGroups/myScopeAVNM/providers/Microsoft.Network/virtualNetworks/targetVnet01"
+```
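+
+To confirm the cross-tenant virtual network was added, you can list the static members of the network group. This is a sketch that assumes the list subcommand mirrors the parameters of the create command above.
+
+```azurecli
+# List static members of the network group
+az network manager group static-member list --network-group-name "CrossTenantNetworkGroup" --network-manager-name "myAVNM" --resource-group "myAVNMResourceGroup"
+```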
+## Delete virtual network manager configurations
+
+Now that the virtual network is in the network group, configurations will be applied. To remove the static member or cross-tenant resources, use the corresponding delete commands.
+
+```azurecli
+
+# Delete static member group
+az network manager group static-member delete --network-group-name "CrossTenantNetworkGroup" --network-manager-name "myAVNM" --resource-group "myRG" --static-member-name "targetVnet01"
+
+# Delete scope connections
+az network manager scope-connection delete --resource-group "myRG" --network-manager-name "myAVNM" --name "ToTargetManagedTenant"
+
+# Switch to the 'target managed tenant' if needed
+#
+az network manager connection subscription delete --name "toCentralManagementTenant"
+
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+
+- Learn more about [Security admin rules](concept-security-admins.md).
+
+- Learn how to [create a mesh network topology with Azure Virtual Network Manager using the Azure portal](how-to-create-mesh-network.md)
+
+- Check out the [Azure Virtual Network Manager FAQ](faq.md)
virtual-network Create Peering Different Deployment Models Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-deployment-models-subscriptions.md
This tutorial uses different accounts for each subscription. If you're using an
--name myVnetAToMyVnetB \ --resource-group $rgName \ --vnet-name myVnetA \
- --remote-vnet-id /subscriptions/<SubscriptionB-id>/resourceGroups/Default-Networking/providers/Microsoft.ClassicNetwork/virtualNetworks/myVnetB \
+ --remote-vnet /subscriptions/<SubscriptionB-id>/resourceGroups/Default-Networking/providers/Microsoft.ClassicNetwork/virtualNetworks/myVnetB \
--allow-vnet-access ```
virtual-network Configure Public Ip Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-firewall.md
In this section, you'll create an Azure Firewall. You'll select the IP address y
## Change public IP address
-In this section, you'll change the public IP address associated with the firewall. A firewall must have at least one public IP address associated with its configuration.
+In this section, you'll change the public IP address associated with the firewall. A firewall must have at least one public IP address associated with its configuration. The firewall's existing IP address can't be updated if any DNAT rules are associated with it.
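+
+If you'd like to check from the command line first, the following is a hedged sketch that lists the firewall's NAT rule collections so you can confirm none reference the current public IP. It assumes a firewall configured with classic rules and the `azure-firewall` Azure CLI extension; the firewall and resource group names are placeholders.
+
+```azurecli
+# List NAT rule collections on the firewall; confirm no DNAT rules use the IP you plan to replace
+az network firewall nat-rule collection list --firewall-name myFirewall --resource-group myResourceGroup
+```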
1. In the search box at the top of the portal, enter **Firewall**.
virtual-network Configure Public Ip Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-load-balancer.md
You'll learn how to change the frontend configuration of an outbound backend poo
Finally, the article reviews unique aspects of using public IPs and public IP prefixes with a load balancer. > [!NOTE]
-> Standard SKU load balancer and public IP are used for the examples in this article. For basic SKU load balancers, the procedures are the same except for the selection of SKU upon creation of the load balancer and public IP resource. Basic load balancers don't support outbound rules or public IP prefixes.
+> Standard SKU load balancer and public IP are used for the examples in this article. For basic SKU load balancers, the procedures are the same except for the selection of SKU upon creation of the load balancer and public IP resource. Basic load balancers don't support outbound rules or public IP prefixes. These procedures are also valid for a cross-region load balancer. For more information on cross-region load balancer, see [Cross-region load balancer](../../load-balancer/cross-region-overview.md).
## Prerequisites
To change the IP, you'll associate a new public IP address previously created wi
9. Verify the load balancer frontend displays the new IP address named **myStandardPublicIP-2**.
- > [!NOTE]
- > These procedures are valid for a cross-region load balancer. For more information on cross-region load balancer, see **[Cross-region load balancer](../../load-balancer/cross-region-overview.md)**.
-
+> [!NOTE]
+> This technique can be utilized when transitioning from a non-zonal frontend to a zone-redundant frontend in regions that support availability zones. See [Load Balancer and Availability Zones](../../load-balancer/load-balancer-standard-availability-zones.md).
## Add public IP prefix
virtual-network Create Custom Ip Address Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-cli.md
The steps in this article detail the process to:
[!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)]
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- This tutorial requires version 2.28 or later of the Azure CLI (you can run az version to determine which you have). If using Azure Cloud Shell, the latest version is already installed. - Sign in to Azure CLI and ensure you've selected the subscription with which you want to use this feature using `az account`. - A customer owned IPv4 range to provision in Azure.
The following command creates a custom IP prefix in the specified region and res
```azurecli-interactive byoipauth="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|1.2.3.0/24|yyyymmdd"
- az network public-ip prefix create \
+ az network custom-ip prefix create \
--name myCustomIpPrefix \ --resource-group myResourceGroup \ --location westus2 \ --cidr '1.2.3.0/24' \
+ --zone 1 2 3 \
--authorization-message $byoipauth \ --signed-message $byoipauthsigned ```
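+
+After the command completes, the range goes through validation before reaching the **Provisioned** state. The following is a hedged sketch for checking progress, assuming the state is surfaced by the show command:
+
+```azurecli
+# Check the current state of the custom IP prefix
+az network custom-ip prefix show --name myCustomIpPrefix --resource-group myResourceGroup
+```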
virtual-network Create Custom Ip Address Prefix Ipv6 Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-ipv6-cli.md
+
+ Title: Create a custom IPv6 address prefix - Azure CLI
+
+description: Learn about how to create a custom IPv6 address prefix using Azure CLI
++++ Last updated : 03/31/2022++
+# Create a custom IPv6 address prefix using Azure CLI
+
+A custom IPv6 address prefix enables you to bring your own IPv6 ranges to Microsoft and associate them with your Azure subscription. The range would continue to be owned by you, though Microsoft would be permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
+
+The steps in this article detail the process to:
+
+* Prepare a range to provision
+
+* Provision the range for IP allocation
+
+* Enable the range to be advertised by Microsoft
+
+## Differences between using BYOIPv4 and BYOIPv6
+
+> [!IMPORTANT]
+> Onboarded custom IPv6 address prefixes have several unique attributes that make them different from custom IPv4 address prefixes.
+
+* Custom IPv6 prefixes use a "parent"/"child" model, where the global (parent) range is advertised by the Microsoft Wide Area Network (WAN) and the regional (child) range(s) are advertised by their respective region(s). Note that global ranges must be /48 in size, while regional ranges must always be /64 in size.
+
+* Only the global range needs to be validated using the steps detailed in the [Create Custom IP Address Prefix](create-custom-ip-address-prefix-portal.md) articles. The regional ranges are derived from the global range in a similar manner to the way public IP prefixes are derived from custom IP prefixes.
+
+* Public IPv6 prefixes must be derived from the regional ranges. Only the first 2048 IPv6 addresses of each regional /64 custom IP prefix can be utilized as valid IPv6 space. Attempting to create public IPv6 prefixes that span beyond this will result in an error.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- This tutorial requires version 2.37 or later of the Azure CLI (you can run az version to determine which you have). If using Azure Cloud Shell, the latest version is already installed.
+- Sign in to Azure CLI and ensure you've selected the subscription with which you want to use this feature using `az account`.
+- A customer owned IPv6 range to provision in Azure.
+ - A sample customer range (2a05:f500:2::/48) is used for this example. This range won't be validated by Azure. Replace the example range with yours.
+
+> [!NOTE]
+> For problems encountered during the provisioning process, please see [Troubleshooting for custom IP prefix](manage-custom-ip-address-prefix.md#troubleshooting-and-faqs).
+
+## Pre-provisioning steps
+
+To utilize the Azure BYOIP feature, you must perform a number of steps prior to the provisioning of your IPv6 address range. Please refer to the [IPv4 instructions](create-custom-ip-address-prefix-cli.md#pre-provisioning-steps) for details. Note that all these steps should be completed for the IPv6 global (parent) range.
+
+## Provisioning for IPv6
+
+The following sections show the modified steps for provisioning a sample global (parent) IPv6 range (2a05:f500:2::/48) and regional (child) IPv6 ranges. Note that some of the steps have been abbreviated or condensed from the [IPv4 instructions](create-custom-ip-address-prefix-cli.md) to focus on the differences between IPv4 and IPv6.
+
+### Create a resource group and specify the prefix and authorization messages
+
+Create a resource group in the desired location for provisioning the global range resource.
+
+> [!IMPORTANT]
+> Although the resource for the global range will be associated with a region, the prefix will be advertised by the Microsoft WAN globally.
+
+```azurecli-interactive
+ az group create \
+ --name myResourceGroup \
+ --location westus2
+```
+
+### Provision a global custom IPv6 address prefix
+
+The following command creates a custom IP prefix in the specified region and resource group. Specify the exact prefix in CIDR notation as a string to ensure there's no syntax error. (The `--authorization-message` and `--signed-message` parameters are constructed in the same manner as they are for IPv4; for more information, see [Create a custom IP prefix - CLI](create-custom-ip-address-prefix-cli.md).) Note that no zonal properties are provided because the global range isn't associated with any particular region (and therefore no regional availability zones).
+
+```azurecli-interactive
+ byoipauth="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|2a05:f500:2::/48|yyyymmdd"
+
+ az network custom-ip prefix create \
+ --name myCustomIPv6GlobalPrefix \
+ --resource-group myResourceGroup \
+ --location westus2 \
+ --cidr '2a05:f500:2::/48' \
+ --authorization-message $byoipauth \
+ --signed-message $byoipauthsigned
+```
+
+### Provision a regional custom IPv6 address prefix
+
+After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be of size /64 to be considered valid. The ranges can be created in any region (it doesn't need to be the same region as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The "child" custom IP prefixes will be advertised locally from the region they're created in. Because the validation is only done for the global custom IP prefix provision, no authorization or signed message is required. (Because these ranges will be advertised from a specific region, zones can be utilized.)
+
+```azurecli-interactive
+ az network custom-ip prefix create \
+ --name myCustomIPv6RegionalPrefix \
+ --resource-group myResourceGroup \
+ --location westus2 \
+ --cidr '2a05:f500:2:1::/64' \
+ --zone 1 2 3
+```
+
+Similar to IPv4 custom IP prefixes, after the regional custom IP prefix is in a **Provisioned** state, public IP prefixes can be derived from the regional custom IP prefix. These public IP prefixes and any public IP addresses derived from them can be attached to networking resources, though they are not yet being advertised.
+
+> [!IMPORTANT]
+> Public IPv6 prefixes derived from regional custom IPv6 prefixes can only utilize the first 2048 IPs of the /64 range.
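+
+As an illustration of deriving space from the regional range, here's a hedged sketch of creating a public IPv6 prefix from the regional custom IPv6 prefix. The `--custom-ip-prefix-name` association and the /124 length are assumptions for this example rather than values taken from this article.
+
+```azurecli-interactive
+# Derive a public IPv6 prefix from the regional custom IPv6 prefix (stays within the usable address space)
+az network public-ip prefix create \
+ --name myPublicIPv6Prefix \
+ --resource-group myResourceGroup \
+ --location westus2 \
+ --version IPv6 \
+ --length 124 \
+ --custom-ip-prefix-name myCustomIPv6RegionalPrefix
+```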
+
+### Commission the custom IPv6 address prefixes
+
+When commissioning custom IPv6 prefixes, the global and regional prefixes are treated separately. In other words, commissioning a regional custom IPv6 prefix isn't connected to commissioning the global custom IPv6 prefix.
++
+The safest strategy for range migrations is as follows:
+1. Provision all required regional custom IPv6 prefixes in their respective regions. Create public IPv6 prefixes and public IP addresses and attach to resources.
+2. Commission each regional custom IPv6 prefix and test connectivity to the IPs within the region. Repeat for each regional custom IPv6 prefix.
+3. After all regional custom IPv6 prefixes (and derived prefixes/IPs) have been verified to work as expected, commission the global custom IPv6 prefix, which will advertise the larger range to the Internet.
+
+Using the example ranges above, the command sequence would be:
+
+```azurecli-interactive
+az network custom-ip prefix update \
+ --name myCustomIPv6RegionalPrefix \
+ --resource-group myResourceGroup \
+ --state commission
+```
+
+Followed by:
+
+```azurecli-interactive
+az network custom-ip prefix update \
+ --name myCustomIPv6GlobalPrefix \
+ --resource-group myResourceGroup \
+ --state commission
+```
+
+It is possible to commission the global custom IPv6 prefix prior to the regional custom IPv6 prefixes; however, note that this will mean the global range is being advertised to the Internet before the regional prefixes are ready, so this is not recommended for migrations of active ranges. Additionally, it is possible to decommission a global custom IPv6 prefix while there are still active (commissioned) regional custom IPv6 prefixes or to decommission a regional custom IP prefix while the global prefix is still active (commissioned).
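+
+If you later need to reverse an advertisement, the same update command is used with the decommission state. A minimal sketch, shown here for the regional range (mirror it for the global range if required):
+
+```azurecli-interactive
+az network custom-ip prefix update \
+ --name myCustomIPv6RegionalPrefix \
+ --resource-group myResourceGroup \
+ --state decommission
+```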
+
+## Next steps
+
+- To learn about scenarios and benefits of using a custom IP prefix, see [Custom IP address prefix (BYOIP)](custom-ip-address-prefix.md).
+
+- For more information on managing a custom IP prefix, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md).
virtual-network Create Custom Ip Address Prefix Ipv6 Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-ipv6-powershell.md
+
+ Title: Create a custom IPv6 address prefix - Azure PowerShell
+
+description: Learn about how to create a custom IPv6 address prefix using Azure PowerShell
++++ Last updated : 03/31/2022++
+# Create a custom IPv6 address prefix using Azure PowerShell
+
+A custom IPv6 address prefix enables you to bring your own IPv6 ranges to Microsoft and associate them with your Azure subscription. The range would continue to be owned by you, though Microsoft would be permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
+
+The steps in this article detail the process to:
+
+* Prepare a range to provision
+
+* Provision the range for IP allocation
+
+* Enable the range to be advertised by Microsoft
+
+## Differences between using BYOIPv4 and BYOIPv6
+
+> [!IMPORTANT]
+> Onboarded custom IPv6 address prefixes have several unique attributes that make them different from custom IPv4 address prefixes.
+
+* Custom IPv6 prefixes use a "parent"/"child" model, where the global (parent) range is advertised by the Microsoft Wide Area Network (WAN) and the regional (child) range(s) are advertised by their respective region(s). Note that global ranges must be /48 in size, while regional ranges must always be /64 in size.
+
+* Only the global range needs to be validated using the steps detailed in the [Create Custom IP Address Prefix](create-custom-ip-address-prefix-portal.md) articles. The regional ranges are derived from the global range in a similar manner to the way public IP prefixes are derived from custom IP prefixes.
+
+* Public IPv6 prefixes must be derived from the regional ranges. Only the first 2048 IPv6 addresses of each regional /64 custom IP prefix can be utilized as valid IPv6 space. Attempting to create public IPv6 prefixes that span beyond this will result in an error.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure PowerShell installed locally or Azure Cloud Shell.
+- Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
+- Ensure your Az.Network module is 4.21.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name "Az.Network"`.
+- A customer owned IP range to provision in Azure.
+ - A sample customer range (2a05:f500:2::/48) is used for this example. This range won't be validated by Azure. Replace the example range with yours.
+
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+
+> [!NOTE]
+> For problems encountered during the provisioning process, please see [Troubleshooting for custom IP prefix](manage-custom-ip-address-prefix.md#troubleshooting-and-faqs).
+
+## Pre-provisioning steps
+
+To utilize the Azure BYOIP feature, you must perform a number of steps prior to the provisioning of your IPv6 address range. Please refer to the [IPv4 instructions](create-custom-ip-address-prefix-powershell.md#pre-provisioning-steps) for details. Note that all these steps should be completed for the IPv6 global (parent) range.
+
+## Provisioning for IPv6
+
+The following sections show the modified steps for provisioning a sample global (parent) IPv6 range (2a05:f500:2::/48) and regional (child) IPv6 ranges. Note that some of the steps have been abbreviated or condensed from the [IPv4 instructions](create-custom-ip-address-prefix-powershell.md) to focus on the differences between IPv4 and IPv6.
+
+### Create a resource group and specify the prefix and authorization messages
+
+Create a resource group in the desired location for provisioning the global range resource.
+
+> [!IMPORTANT]
+> Although the resource for the global range will be associated with a region, the prefix will be advertised by the Microsoft WAN globally.
+
+ ```azurepowershell-interactive
+$rg =@{
+ Name = 'myResourceGroup'
+ Location = 'WestUS2'
+}
+New-AzResourceGroup @rg
+```
+
+### Provision a global custom IPv6 address prefix
+
+The following command creates a custom IP prefix in the specified region and resource group. Specify the exact prefix in CIDR notation as a string to ensure there's no syntax error. (The `-AuthorizationMessage` and `-SignedMessage` parameters are constructed in the same manner as they are for IPv4; for more information, see [Create a custom IP prefix - PowerShell](create-custom-ip-address-prefix-powershell.md).) Note that no zonal properties are provided because the global range isn't associated with any particular region (and therefore no regional availability zones).
+
+ ```azurepowershell-interactive
+$prefix =@{
+ Name = 'myCustomIPv6GlobalPrefix'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'WestUS'
+ CIDR = '2a05:f500:2::/48'
+ AuthorizationMessage = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|2a05:f500:2::/48|yyyymmdd'
+ SignedMessage = $byoipauthsigned
+}
+$myCustomIPv6GlobalPrefix = New-AzCustomIPPrefix @prefix
+```
+
+### Provision a regional custom IPv6 address prefix
+
+After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be of size /64 to be considered valid. The ranges can be created in any region (it doesn't need to be the same region as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The "child" custom IP prefixes will be advertised locally from the region they're created in. Because the validation is only done for the global custom IP prefix provision, no authorization or signed message is required. (Because these ranges will be advertised from a specific region, zones can be utilized.)
+
+ ```azurepowershell-interactive
+$prefix =@{
+ Name = 'myCustomIPv6RegionalPrefix'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'EastUS2'
+ CIDR = '2a05:f500:2:1::/64'
+}
+$myCustomIPv6RegionalPrefix = New-AzCustomIPPrefix @prefix -Zone 1,2,3
+```
+Similar to IPv4 custom IP prefixes, after the regional custom IP prefix is in a **Provisioned** state, public IP prefixes can be derived from the regional custom IP prefix. These public IP prefixes and any public IP addresses derived from them can be attached to networking resources, though they are not yet being advertised.
+
+> [!IMPORTANT]
+> Public IPv6 prefixes derived from regional custom IPv6 prefixes can only utilize the first 2048 IPs of the /64 range.
+
+### Commission the custom IPv6 address prefixes
+
+When commissioning custom IPv6 prefixes, the global and regional prefixes are treated separately. In other words, commissioning a regional custom IPv6 prefix isn't connected to commissioning the global custom IPv6 prefix.
++
+The safest strategy for range migrations is as follows:
+1. Provision all required regional custom IPv6 prefixes in their respective regions. Create public IPv6 prefixes and public IP addresses and attach to resources.
+2. Commission each regional custom IPv6 prefix and test connectivity to the IPs within the region. Repeat for each regional custom IPv6 prefix.
+3. After all regional custom IPv6 prefixes (and derived prefixes/IPs) have been verified to work as expected, commission the global custom IPv6 prefix, which will advertise the larger range to the Internet.
+
+Using the example ranges above, the command sequence would be:
+
+```azurepowershell-interactive
+Update-AzCustomIpPrefix -ResourceId $myCustomIPv6RegionalPrefix.Id -Commission
+```
+Followed by:
+
+```azurepowershell-interactive
+Update-AzCustomIpPrefix -ResourceId $myCustomIPv6GlobalPrefix.Id -Commission
+```
+
+It is possible to commission the global custom IPv6 prefix prior to the regional custom IPv6 prefixes; however, note that this will mean the global range is being advertised to the Internet before the regional prefixes are ready, so this is not recommended for migrations of active ranges. Additionally, it is possible to decommission a global custom IPv6 prefix while there are still active (commissioned) regional custom IPv6 prefixes or to decommission a regional custom IP prefix while the global prefix is still active (commissioned).
+
+## Next steps
+
+- To learn about scenarios and benefits of using a custom IP prefix, see [Custom IP address prefix (BYOIP)](custom-ip-address-prefix.md).
+
+- For more information on managing a custom IP prefix, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md).
virtual-network Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/custom-ip-address-prefix.md
When ready, you can issue the command to have your range advertised from Azure a
## Next steps -- To create a custom IP address prefix using the Azure portal, see [Create custom IP address prefix using the Azure portal](create-custom-ip-address-prefix-portal.md).
+- To create a custom IP address prefix using the Azure portal, see [Create custom IPv4 address prefix using the Azure portal](create-custom-ip-address-prefix-portal.md).
-- To create a custom IP address prefix using PowerShell, see [Create a custom IP address prefix using Azure PowerShell](create-custom-ip-address-prefix-powershell.md).
+- To create a custom IP address prefix using PowerShell, see [Create a custom IPv4 address prefix using Azure PowerShell](create-custom-ip-address-prefix-powershell.md).
- For more information about the management of a custom IP address prefix, see [Manage a custom IP address prefix](manage-custom-ip-address-prefix.md).
virtual-network Manage Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/manage-custom-ip-address-prefix.md
Before you decommission a custom IP prefix, ensure it has no public IP prefixes
To migrate a custom IP prefix, it must first be deprovisioned from one region. A new custom IP prefix with the same CIDR can then be created in another region.
-### Any special considerations using IPv6
+### Are there any special considerations when using IPv6?
-Yes - there are multiple differences for provisioning and commissioning when using BYOIPv6. Please see [Create a custom IP address prefix - IPv6](create-custom-ip-address-prefix-ipv6.md) for more details.
+Yes - there are multiple differences for provisioning and commissioning when using BYOIPv6. Please see [Create a custom IPv6 address prefix - PowerShell](create-custom-ip-address-prefix-ipv6-powershell.md) for more details.
### Status messages
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **GatewayManager** | Management traffic for deployments dedicated to Azure VPN Gateway and Application Gateway. | Inbound | No | No | | **GuestAndHybridManagement** | Azure Automation and Guest Configuration. | Outbound | No | Yes | | **HDInsight** | Azure HDInsight. | Inbound | Yes | No |
-| **Internet** | The IP address space that's outside the virtual network and reachable by the public internet.<br/><br/>The address range includes the [Azure-owned public IP address space](https://www.microsoft.com/download/details.aspx?id=41653). | Both | No | No |
+| **Internet** | The IP address space that's outside the virtual network and reachable by the public internet.<br/><br/>The address range includes the [Azure-owned public IP address space](https://www.microsoft.com/download/details.aspx?id=56519). | Both | No | No |
| **LogicApps** | Logic Apps. | Both | No | No | | **LogicAppsManagement** | Management traffic for Logic Apps. | Inbound | No | No | | **M365ManagementActivityApi** | The Office 365 Management Activity API provides information about various user, admin, system, and policy actions and events from Office 365 and Azure Active Directory activity logs. Customers and partners can use this information to create new or enhance existing operations, security, and compliance-monitoring solutions for the enterprise.<br/><br/>**Note**: This tag has a dependency on the **AzureActiveDirectory** tag. | Outbound | Yes | No |
virtual-network Virtual Network Network Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-network-interface.md
New-AzNetworkInterface @nic
In this example, you'll create an Azure Public IP address and associate it with the network interface.
-Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create)to create a primary public IP address.
+Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a primary public IP address.
```azurecli-interactive az network public-ip create \
To perform tasks on network interfaces, your account must be assigned to the [ne
- Create a single NIC VM with multiple IPv4 addresses using the [Azure CLI](./ip-services/virtual-network-multiple-ip-addresses-cli.md) or [PowerShell](./ip-services/virtual-network-multiple-ip-addresses-powershell.md) -- Create a single NIC VM with a private IPv6 address (behind an Azure Load Balancer) using the [Azure CLI](../load-balancer/load-balancer-ipv6-internet-cli.md?toc=%2fazure%2fvirtual-network%2ftoc.json), [PowerShell](../load-balancer/load-balancer-ipv6-internet-ps.md?toc=%2fazure%2fvirtual-network%2ftoc.json), or [Azure Resource Manager template](../load-balancer/load-balancer-ipv6-internet-template.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
+- Create a single NIC VM with a private IPv6 address (behind an Azure Load Balancer) using the [Azure CLI](../load-balancer/load-balancer-ipv6-internet-cli.md?toc=%2fazure%2fvirtual-network%2ftoc.json), [PowerShell](../load-balancer/load-balancer-ipv6-internet-ps.md?toc=%2fazure%2fvirtual-network%2ftoc.json), or [Azure Resource Manager template](../load-balancer/load-balancer-ipv6-internet-template.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
virtual-network Virtual Network Service Endpoints Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoints-overview.md
Service endpoints can be configured on virtual networks independently by a user
For more information about built-in roles, see [Azure built-in roles](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json). For more information about assigning specific permissions to custom roles, see [Azure custom roles](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
-Virtual networks and Azure service resources can be in the same or different subscriptions. Certain Azure Services (not all) such as Azure Storage and Azure Key Vault also support service endpoints across different Active Directory(AD) tenant. This means the virtual network and Azure service resource can be in different Active Directory (AD) tenants. Check individual service documentation for more details.
+Virtual networks and Azure service resources can be in the same or different subscriptions. Certain Azure Services (not all) such as Azure Storage and Azure Key Vault also support service endpoints across different Active Directory (AD) tenants. This means the virtual network and Azure service resource can be in different Active Directory (AD) tenants. Check individual service documentation for more details.
## Pricing and limits
For FAQs, see [Virtual Network Service Endpoint FAQs](./virtual-networks-faq.md#
- [Secure an Azure Synapse Analytics to a virtual network](/azure/azure-sql/database/vnet-service-endpoint-rule-overview?toc=%2fazure%2fsql-data-warehouse%2ftoc.json) - [Compare Private Endpoints and Service Endpoints](./vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints) - [Virtual Network Service Endpoint Policies](./virtual-network-service-endpoint-policies-overview.md)-- [Azure Resource Manager template](https://azure.microsoft.com/resources/templates/vnet-2subnets-service-endpoints-storage-integration)
+- [Azure Resource Manager template](https://azure.microsoft.com/resources/templates/vnet-2subnets-service-endpoints-storage-integration)
virtual-network Virtual Networks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-faq.md
No. Multicast and broadcast are not supported.
You can use TCP, UDP, and ICMP TCP/IP protocols within VNets. Unicast is supported within VNets, with the exception of Dynamic Host Configuration Protocol (DHCP) via Unicast (source port UDP/68 / destination port UDP/67) and UDP source port 65330 which is reserved for the host. Multicast, broadcast, IP-in-IP encapsulated packets, and Generic Routing Encapsulation (GRE) packets are blocked within VNets. ### Can I ping default gateway within a VNet?
-No. Azure provided default gateway does not respond ping. But you can use ping in your VNets to check connectivity and troubleshooting between VMs.
+No. The Azure-provided default gateway doesn't respond to ping. But you can use ping in your VNets to check connectivity and troubleshoot connections between VMs.
### Can I use tracert to diagnose connectivity? Yes.
web-application-firewall Waf Front Door Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-custom-rules.md
Previously updated : 03/22/2022 Last updated : 11/01/2022
Azure Web Application Firewall (WAF) with Front Door allows you to control acces
For more information on rate limiting, see [What is rate limiting for Azure Front Door Service?](waf-front-door-rate-limit.md).
-## Priority, match conditions, and action types
+## Priority, action types, and match conditions
You can control access with a custom WAF rule that defines a priority number, a rule type, an array of match conditions, and an action. -- **Priority:** is a unique integer that describes the order of evaluation of WAF rules. Rules with lower priority values are evaluated before rules with higher values. The rule evaluation stops on any rule action except for *Log*. Priority numbers must be unique among all custom rules.
+- **Priority**
-- **Action:** defines how to route a request if a WAF rule is matched. You can choose one of the below actions to apply when a request matches a custom rule.
+ A unique integer that describes the order of evaluation of WAF rules. Rules with lower priority values are evaluated before rules with higher values. The rule evaluation stops on any rule action except for *Log*. Priority numbers must be unique among all custom rules.
+
+- **Action**
+
+ Defines how to route a request if a WAF rule is matched. You can choose one of the below actions to apply when a request matches a custom rule.
- *Allow* - WAF allows the request to process, logs an entry in WAF logs, and exits. - *Block* - Request is blocked. WAF sends response to client without forwarding the request further. WAF logs an entry in WAF logs and exits. - *Log* - WAF logs an entry in WAF logs, and continues to evaluate the next rule in the priority order. - *Redirect* - WAF redirects the request to a specified URI, logs an entry in WAF logs, and exits. -- **Match condition:** defines a match variable, an operator, and match value. Each rule may contain multiple match conditions. A match condition may be based on geo location, client IP addresses (CIDR), size, or string match. String match can be against a list of match variables.
- - **Match variable:**
+- **Match condition**
+
+ Defines a match variable, an operator, and a match value. Each rule may contain multiple match conditions. A match condition may be based on geolocation, client IP addresses (CIDR), size, or string match. A string match can be against a list of match variables.
+ - **Match variable**
- RequestMethod - QueryString - PostArgs
You can control access with a custom WAF rule that defines a priority number, a
- RequestHeader - RequestBody - Cookies
- - **Operator:**
- - Any: is often used to define default action if no rules are matched. Any is a match all operator.
+ - **Operator**
+ - Any - is often used to define a default action if no rules are matched. Any is a match-all operator.
- Equal - Contains - LessThan: size constraint
You can control access with a custom WAF rule that defines a priority number, a
- EndsWith - Regex
- - **Regex** does not support the following operations:
+ - **Regex**
+
+ Doesn't support the following operations:
+ - Backreferences and capturing subexpressions - Arbitrary zero-width assertions - Subroutine references and recursive patterns
You can control access with a custom WAF rule that defines a priority number, a
- Callouts and embedded code - Atomic grouping and possessive quantifiers
- - **Negate [optional]:**
+ - **Negate [optional]**
+ You can set the *negate* condition to true if the result of a condition should be negated.
- - **Transform [optional]:**
+ - **Transform [optional]**
+ A list of strings with names of transformations to do before the match is attempted. These can be the following transformations: - Uppercase - Lowercase
You can control access with a custom WAF rule that defines a priority number, a
- UrlDecode - UrlEncode
- - **Match value:**
+ - **Match value**
+
Supported HTTP request method values include: - GET - POST
You can control access with a custom WAF rule that defines a priority number, a
- MKCOL - COPY - MOVE
+ - PATCH
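+
+Ahead of the fuller examples in the next section, here's a hedged sketch that ties these pieces together using the Azure CLI front-door extension: it creates a block rule and then adds a single match condition. The policy name, rule name, and match value are placeholders, and the two-step create/add pattern is an assumption of this sketch rather than guidance from this article.
+
+```azurecli
+# Create a custom block rule (deferred locally until a match condition is added)
+az network front-door waf-policy rule create --name BlockAdminPath --policy-name myWafPolicy --resource-group myResourceGroup --priority 100 --rule-type MatchRule --action Block --defer
+
+# Add a match condition: block requests whose URI contains /admin (Lowercase transform makes the match case-insensitive)
+az network front-door waf-policy rule match-condition add --name BlockAdminPath --policy-name myWafPolicy --resource-group myResourceGroup --match-variable RequestUri --operator Contains --values "/admin" --transforms Lowercase
+```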
## Examples