Updates from: 05/09/2023 01:10:57
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Identity Provider Microsoft Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-microsoft-account.md
Previously updated : 02/13/2023 Last updated : 05/01/2023
zone_pivot_groups: b2c-policy-type
[!INCLUDE [active-directory-b2c-customization-prerequisites](../../includes/active-directory-b2c-customization-prerequisites.md)]
-### Verify the application's publisher domain
-As of November 2020, new application registrations show up as unverified in the user consent prompt unless [the application's publisher domain is verified](../active-directory/develop/howto-configure-publisher-domain.md) ***and*** the company's identity has been verified with the Microsoft Partner Network and associated with the application. ([Learn more](../active-directory/develop/publisher-verification-overview.md) about this change.) Note that for Azure AD B2C user flows, the publisher's domain appears only when using a Microsoft account or other [Azure AD](../active-directory-b2c/identity-provider-azure-ad-single-tenant.md) tenant as the identity provider. To meet these new requirements, do the following:
-
-1. [Verify your company identity using your Microsoft Partner Network (MPN) account](/partner-center/verification-responses). This process verifies information about your company and your company's primary contact.
-1. Complete the publisher verification process to associate your MPN account with your app registration using one of the following options:
- - If the app registration for the Microsoft account identity provider is in an Azure AD tenant, [verify your app in the App Registration portal](../active-directory/develop/mark-app-as-publisher-verified.md).
 - If your app registration for the Microsoft account identity provider is in an Azure AD B2C tenant, [mark your app as publisher verified using Microsoft Graph APIs](../active-directory/develop/troubleshoot-publisher-verification.md#making-microsoft-graph-api-calls) (for example, using Graph Explorer). The UI for setting an app's verified publisher is currently disabled for Azure AD B2C tenants.
- ## Create a Microsoft account application To enable sign-in for users with a Microsoft account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in [Azure portal](https://portal.azure.com). For more information, see [Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). If you don't already have a Microsoft account, you can get one at [https://www.live.com/](https://www.live.com/).
You've now configured your policy so that Azure AD B2C knows how to communicate
If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
active-directory Export Import Provisioning Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/export-import-provisioning-configuration.md
Previously updated : 10/20/2022 Last updated : 05/05/2023 # How-to: Export provisioning configuration and roll back to a known good state
-In this article, you'll learn how to:
+In this article, you learn how to:
- Export and import your provisioning configuration from the Azure portal - Export and import your provisioning configuration by using the Microsoft Graph API
To export your configuration:
1. In the [Azure portal](https://portal.azure.com/), on the left navigation panel, select **Azure Active Directory**. 1. In the **Azure Active Directory** pane, select **Enterprise applications** and choose your application.
-1. In the left navigation pane, select **provisioning**. From the provisioning configuration page, click on **attribute mappings**, then **show advanced options**, and finally **review your schema**. This will take you to the schema editor.
+1. In the left navigation pane, select **provisioning**. From the provisioning configuration page, click on **attribute mappings**, then **show advanced options**, and finally **review your schema**. The schema editor opens.
1. Click on download in the command bar at the top of the page to download your schema. ### Disaster recovery - roll back to a known good state
-Exporting and saving your configuration allows you to roll back to a previous version of your configuration. We recommend exporting your provisioning configuration and saving it for later use anytime you make a change to your attribute mappings or scoping filters. All you need to do is open up the JSON file that you downloaded in the steps above, copy the entire contents of the JSON file, replace the entire contents of the JSON payload in the schema editor, and then save. If there is an active provisioning cycle, it will complete and the next cycle will use the updated schema. The next cycle will also be an initial cycle, which reevaluates every user and group based on the new configuration. Consider the following when rolling back to a previous configuration:
+Exporting and saving your configuration allows you to roll back to a previous version of your configuration. We recommend exporting your provisioning configuration and saving it for later use anytime you make a change to your attribute mappings or scoping filters. Open the JSON file that you downloaded and copy its entire contents. Then replace the entire contents of the JSON payload in the schema editor, and save. If there's an active provisioning cycle, it completes and the next cycle uses the updated schema. The next cycle is also an initial cycle, which reevaluates every user and group based on the new configuration.
-- Users will be evaluated again to determine if they should be in scope. If the scoping filters have changed a user is not in scope any more they will be disabled. While this is the desired behavior in most cases, there are times where you may want to prevent this and can use the [skip out of scope deletions](./skip-out-of-scope-deletions.md) functionality.
+Some things to consider when rolling back to a previous configuration:
+
+- Users are evaluated again to determine if they should be in scope. If the scoping filters have changed and a user is no longer in scope, the user is disabled. While this is the desired behavior in most cases, there are times when you may want to prevent it. To prevent it, use the [skip out of scope deletions](./skip-out-of-scope-deletions.md) functionality.
- Changing your provisioning configuration restarts the service and triggers an [initial cycle](./how-provisioning-works.md#provisioning-cycles-initial-and-incremental). ## Export and import your provisioning configuration by using the Microsoft Graph API
You can use the Microsoft Graph API and the Microsoft Graph Explorer to export y
### Step 1: Retrieve your Provisioning App Service Principal ID (Object ID) 1. Launch the [Azure portal](https://portal.azure.com), and navigate to the Properties section of your provisioning application. For example, if you want to export your *Workday to AD User Provisioning application* mapping navigate to the Properties section of that app.
-1. In the Properties section of your provisioning app, copy the GUID value associated with the *Object ID* field. This value is also called the **ServicePrincipalId** of your App and it will be used in Microsoft Graph Explorer operations.
+1. In the Properties section of your provisioning app, copy the GUID value associated with the *Object ID* field. This value is also called the **ServicePrincipalId** of your App and it's used in Microsoft Graph Explorer operations.
![Workday App Service Principal ID](./media/export-import-provisioning-configuration/wd_export_01.png)
You can use the Microsoft Graph API and the Microsoft Graph Explorer to export y
![Microsoft Graph Sign-in](./media/export-import-provisioning-configuration/wd_export_02.png)
-1. Upon successful sign-in, you will see the user account details in the left-hand pane.
+1. Upon successful sign-in, you see the user account details in the left-hand pane.
### Step 3: Retrieve the Provisioning Job ID of the Provisioning App
In the Microsoft Graph Explorer, run the following GET query replacing [serviceP
GET https://graph.microsoft.com/beta/servicePrincipals/[servicePrincipalId]/synchronization/jobs ```
-You will get a response as shown below. Copy the "id attribute" present in the response. This value is the **ProvisioningJobId** and will be used to retrieve the underlying schema metadata.
+You get a response as shown. Copy the `id` attribute present in the response. This value is the **ProvisioningJobId** and is used to retrieve the underlying schema metadata.
[![Provisioning Job ID](./media/export-import-provisioning-configuration/wd_export_03.png)](./media/export-import-provisioning-configuration/wd_export_03.png#lightbox)
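If you prefer to script this step rather than use Graph Explorer, the following Python sketch runs the same GET query. It assumes you already acquired a Microsoft Graph access token with the required permissions; `ACCESS_TOKEN` and `SERVICE_PRINCIPAL_ID` are placeholders for the token and the Object ID you copied in step 1.

```python
import requests

# Placeholders (assumptions): a Graph access token and the service principal
# Object ID copied from the Properties page in step 1.
ACCESS_TOKEN = "<graph-access-token>"
SERVICE_PRINCIPAL_ID = "<servicePrincipalId>"

# Same query as above: list the synchronization jobs of the provisioning app.
response = requests.get(
    f"https://graph.microsoft.com/beta/servicePrincipals/{SERVICE_PRINCIPAL_ID}/synchronization/jobs",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
response.raise_for_status()

# Graph returns a collection; the "id" of the job is the ProvisioningJobId.
jobs = response.json().get("value", [])
for job in jobs:
    print(job["id"])
```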
In the "Request Headers" tab, add the Content-Type header attribute with value
[![Request Headers](./media/export-import-provisioning-configuration/wd_export_05.png)](./media/export-import-provisioning-configuration/wd_export_05.png#lightbox)
-Select **Run Query** to import the new schema.
+Select **Run Query** to import the new schema.
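For reference, here's a minimal Python sketch of the same import. It assumes the schema is updated with a PUT to the job's `/schema` endpoint (as in the Microsoft Graph synchronization API) and that `schema.json` is the exported file you edited; the token and IDs are placeholders from the earlier steps.

```python
import json
import requests

# Placeholders (assumptions): values gathered in the previous steps and the
# exported schema file you edited locally.
ACCESS_TOKEN = "<graph-access-token>"
SERVICE_PRINCIPAL_ID = "<servicePrincipalId>"
PROVISIONING_JOB_ID = "<provisioningJobId>"

with open("schema.json", encoding="utf-8") as schema_file:
    schema = json.load(schema_file)

url = (
    "https://graph.microsoft.com/beta/servicePrincipals/"
    f"{SERVICE_PRINCIPAL_ID}/synchronization/jobs/{PROVISIONING_JOB_ID}/schema"
)

# The Content-Type header mirrors the header you add in Graph Explorer.
response = requests.put(
    url,
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json=schema,
)
response.raise_for_status()
print("Schema imported, status:", response.status_code)
```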
active-directory Use Scim To Build Users And Groups Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-build-users-and-groups-endpoints.md
Go to the [reference code](https://github.com/AzureAD/SCIMReferenceCode) from Gi
1. If not installed, add [Azure App Service for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureappservice) extension.
-1. To deploy the Microsoft.SCIM.WebHostSample app to Azure App Services, [create a new App Services](../../app-service/quickstart-dotnetcore.md?tabs=net60&pivots=development-environment-vscode#publish-your-web-app).
+1. To deploy the Microsoft.SCIM.WebHostSample app to Azure App Services, [create a new App Services](../../app-service/quickstart-dotnetcore.md?tabs=net60&pivots=development-environment-vscode#2-publish-your-web-app).
1. In the Visual Studio Code terminal, run the .NET CLI command. This command generates a deployable publish folder for the app in the bin/debug/publish directory.
The default token validation code is configured to use an Azure AD token and req
## Next steps
-To develop a SCIM-compliant user and group endpoint with interoperability for a client, see [SCIM client implementation](http://www.simplecloud.info/#Implementations2).
- - [Tutorial: Validate a SCIM endpoint](scim-validator-tutorial.md) - [Tutorial: Develop and plan provisioning for a SCIM endpoint](use-scim-to-provision-users-and-groups.md) - [Tutorial: Configure provisioning for a gallery app](configure-automatic-user-provisioning-portal.md)
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
Title: Use number matching in multifactor authentication (MFA) notifications
+ Title: How number matching works in multifactor authentication (MFA) push notifications for Microsoft Authenticator
description: Learn how to use number matching in MFA notifications Previously updated : 04/10/2023 Last updated : 05/08/2023
-# Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events.
+# Customer intent: As an identity administrator, I want to explain how number matching in MFA push notifications from Authenticator in Azure AD works in different use cases.
-# How to use number matching in multifactor authentication (MFA) notifications - Authentication methods policy
+# How number matching works in multifactor authentication (MFA) push notifications for Authenticator - Authentication methods policy
-This topic covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security.
+This topic covers how number matching in Microsoft Authenticator push notifications improves user sign-in security.
+Number matching is a key security upgrade to traditional second factor notifications in Authenticator.
->[!NOTE]
->Number matching is a key security upgrade to traditional second factor notifications in Microsoft Authenticator. We will remove the admin controls and enforce the number match experience tenant-wide for all users of Microsoft Authenticator push notifications starting May 8, 2023.<br>
->We highly recommend enabling number matching in the near term for improved sign-in security. Relevant services will begin deploying these changes after May 8, 2023 and users will start to see number match in approval requests. As services deploy, some may see number match while others don't. To ensure consistent behavior for all users, we highly recommend you enable number match for Microsoft Authenticator push notifications in advance.
-
-## Prerequisites
--- Your organization needs to enable Microsoft Authenticator (traditional second factor) push notifications for some users or groups by using the new Authentication methods policy. You can edit the Authentication methods policy by using the Azure portal or Microsoft Graph API.
+Beginning May 8, 2023, number matching is enabled for all Authenticator push notifications.
+As relevant services deploy, users worldwide who are enabled for Authenticator push notifications will begin to see number matching in their approval requests.
+Users can be enabled for Authenticator push notifications either in the Authentication methods policy or the legacy multifactor authentication policy if **Notifications through mobile app** is enabled.
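To check which users or groups are enabled for Authenticator in the Authentication methods policy, you can query the Microsoft Graph authentication methods policy configuration for Microsoft Authenticator. The following Python sketch assumes a Graph access token with the `Policy.Read.All` permission; `ACCESS_TOKEN` is a placeholder.

```python
import requests

# Placeholder (assumption): a Graph access token with Policy.Read.All.
ACCESS_TOKEN = "<graph-access-token>"

response = requests.get(
    "https://graph.microsoft.com/beta/authenticationMethodsPolicy/"
    "authenticationMethodConfigurations/MicrosoftAuthenticator",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
response.raise_for_status()
policy = response.json()

# includeTargets lists the users or groups enabled for Authenticator,
# along with their authentication mode (any, push, or deviceBasedPush).
for target in policy.get("includeTargets", []):
    print(target.get("targetType"), target.get("id"), target.get("authenticationMode"))
```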
-- If your organization is using AD FS adapter or NPS extensions, upgrade to the latest versions for a consistent experience. -
-## Number matching
-
-Number matching can be targeted to only a single group, which can be dynamic or nested. On-premises synchronized security groups and cloud-only security groups are supported for the Authentication methods policy.
+## Number matching scenarios
Number matching is available for the following scenarios. When enabled, all scenarios support number matching.
Number matching isn't supported for push notifications for Apple Watch or Androi
### Multifactor authentication
-When a user responds to an MFA push notification using the Authenticator app, they'll be presented with a number. They need to type that number into the app to complete the approval. For more information about how to set up MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
+When a user responds to an MFA push notification using Authenticator, they'll be presented with a number. They need to type that number into the app to complete the approval. For more information about how to set up MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
![Screenshot of user entering a number match.](media/howto-authentication-passwordless-phone/phone-sign-in-microsoft-authenticator-app.png) ### SSPR
-Self-service password reset (SSPR) with Microsoft Authenticator will require number matching when using Microsoft Authenticator. During self-service password reset, the sign-in page will show a number that the user will need to type into the Microsoft Authenticator notification. This number will only be seen by users who are enabled for number matching. For more information about how to set up SSPR, see [Tutorial: Enable users to unlock their account or reset passwords](howto-sspr-deployment.md).
+Self-service password reset (SSPR) with Authenticator requires number matching when using Authenticator. During self-service password reset, the sign-in page shows a number that the user needs to type into the Authenticator notification. For more information about how to set up SSPR, see [Tutorial: Enable users to unlock their account or reset passwords](howto-sspr-deployment.md).
### Combined registration
-Combined registration with Microsoft Authenticator will require number matching. When a user goes through combined registration to set up the Authenticator app, the user is asked to approve a notification as part of adding the account. For users who are enabled for number matching, this notification will show a number that they need to type in their Authenticator app notification. For more information about how to set up combined registration, see [Enable combined security information registration](howto-registration-mfa-sspr-combined.md).
+Combined registration with Authenticator requires number matching. When a user goes through combined registration to set up Authenticator, the user needs to approve a notification to add the account. This notification shows a number that they need to type into the Authenticator notification. For more information about how to set up combined registration, see [Enable combined security information registration](howto-registration-mfa-sspr-combined.md).
### AD FS adapter
-AD FS adapter will require number matching on supported versions of Windows Server. On earlier versions, users will continue to see the **Approve**/**Deny** experience and won't see number matching until you upgrade. The AD FS adapter supports number matching only after installing one of the updates in the following table. For more information about how to set up AD FS adapter, see [Configure Azure Active Directory (Azure AD) Multi-Factor Authentication Server to work with AD FS in Windows Server](howto-mfaserver-adfs-windows-server.md).
+AD FS adapter requires number matching on supported versions of Windows Server. On earlier versions, users continue to see the **Approve**/**Deny** experience and don't see number matching until you upgrade. The AD FS adapter supports number matching only after you install one of the updates in the following table. For more information about how to set up AD FS adapter, see [Configure Azure Active Directory (Azure AD) Multi-Factor Authentication Server to work with AD FS in Windows Server](howto-mfaserver-adfs-windows-server.md).
>[!NOTE]
->Unpatched versions of Windows Server don't support number matching. Users will continue to see the **Approve**/**Deny** experience and won't see number matching unless these updates are applied.
+>Unpatched versions of Windows Server don't support number matching. Users continue to see the **Approve**/**Deny** experience and don't see number matching unless these updates are applied.
| Version | Update |
|---------|--------|
AD FS adapter will require number matching on supported versions of Windows Serv
### NPS extension
-Although NPS doesn't support number matching, the latest NPS extension does support time-based one-time password (TOTP) methods such as the TOTP available in Microsoft Authenticator, other software tokens, and hardware FOBs. TOTP sign-in provides better security than the alternative **Approve**/**Deny** experience. Make sure you run the latest version of the [NPS extension](https://www.microsoft.com/download/details.aspx?id=54688).
-
-After May 8, 2023, when number matching is enabled for all users, anyone who performs a RADIUS connection with NPS extension version 1.2.2216.1 or later will be prompted to sign in with a TOTP method instead.
+Although NPS doesn't support number matching, the latest NPS extension does support time-based one-time password (TOTP) methods such as the TOTP available in Authenticator, other software tokens, and hardware FOBs. TOTP sign-in provides better security than the alternative **Approve**/**Deny** experience. Make sure you run the latest version of the [NPS extension](https://www.microsoft.com/download/details.aspx?id=54688).
+Anyone who performs a RADIUS connection with NPS extension version 1.2.2216.1 or later is prompted to sign in with a TOTP method instead of **Approve**/**Deny**.
Users must have a TOTP authentication method registered to see this behavior. Without a TOTP method registered, users continue to see **Approve**/**Deny**.
-Prior to the release of NPS extension version 1.2.2216.1 after May 8, 2023, organizations that run any of these earlier versions of NPS extension can modify the registry to require users to enter a TOTP:
+Organizations that run any of these earlier versions of NPS extension can modify the registry to require users to enter a TOTP:
- 1.2.2131.2 - 1.2.1959.1
To create the registry entry to override the **Approve**/**Deny** options in pus
In addition: -- Users who perform TOTP must have either Microsoft Authenticator registered as an authentication method, or some other hardware or software OATH token. A user who can't use an OTP method will always see **Approve**/**Deny** options with push notifications if they use a version of NPS extension earlier than 1.2.2216.1.-- Users must be [enabled for number matching](#enable-number-matching-in-the-portal).
+- Users who perform TOTP must have either Authenticator registered as an authentication method, or some other hardware or software OATH token. A user who can't use a TOTP method will always see **Approve**/**Deny** options with push notifications if they use a version of NPS extension earlier than 1.2.2216.1.
- The NPS Server where the NPS extension is installed must be configured to use PAP protocol. For more information, see [Determine which authentication methods your users can use](howto-mfa-nps-extension.md#determine-which-authentication-methods-your-users-can-use). >[!IMPORTANT]
- >MSCHAPv2 doesn't support TOTP. If the NPS Server isn't configured to use PAP, user authorization will fail with events in the **AuthZOptCh** log of the NPS Extension server in Event Viewer:<br>
+ >MSCHAPv2 doesn't support TOTP. If the NPS Server isn't configured to use PAP, user authorization fails with events in the **AuthZOptCh** log of the NPS Extension server in Event Viewer:<br>
>NPS Extension for Azure MFA: Challenge requested in Authentication Ext for User npstesting_ap.
- >You can configure the NPS Server to support PAP. If PAP is not an option, you can set OVERRIDE_NUMBER_MATCHING_WITH_OTP = FALSE to fall back to Approve/Deny push notifications.
-
-If your organization uses Remote Desktop Gateway and the user is registered for a TOTP code along with Microsoft Authenticator push notifications, the user won't be able to meet the Azure AD MFA challenge and Remote Desktop Gateway sign-in will fail. In this case, you can set OVERRIDE_NUMBER_MATCHING_WITH_OTP = FALSE to fall back to **Approve**/**Deny** push notifications with Microsoft Authenticator.
-This is because TOTP will be preferred over the **Approve**/**Deny** push notification and Remote Desktop Gateway doesn't provide the option to enter a verification code with Azure AD Multi-Factor Authentication. For more information, see [Configure accounts for two-step verification](howto-mfa-nps-extension-rdg.md#configure-accounts-for-two-step-verification).
-
-### Apple Watch supported for Microsoft Authenticator
-
-In the upcoming Microsoft Authenticator release in January 2023 for iOS, there will be no companion app for watchOS due to it being incompatible with Authenticator security features. You won't be able to install or use Microsoft Authenticator on Apple Watch. We therefore recommend that you [delete Microsoft Authenticator from your Apple Watch](https://support.apple.com/HT212064), and sign in with Microsoft Authenticator on another device.
-
-## Enable number matching in the portal
-
-To enable number matching in the Azure portal, complete the following steps:
-
-1. In the Azure portal, click **Security** > **Authentication methods** > **Microsoft Authenticator**.
-1. On the **Enable and Target** tab, click **Yes** and **All users** to enable the policy for everyone or add selected users and groups. Set the **Authentication mode** for these users/groups to **Any** or **Push**.
-
- Only users who are enabled for Microsoft Authenticator here can be included in the policy to require number matching for sign-in, or excluded from it. Users who aren't enabled for Microsoft Authenticator can't see the feature.
-
- :::image type="content" border="true" source="./media/how-to-mfa-number-match/enable-settings-number-match.png" alt-text="Screenshot of how to enable Microsoft Authenticator settings for Push authentication mode.":::
-
-1. On the **Configure** tab, for **Require number matching for push notifications**, change **Status** to **Enabled**, choose who to include or exclude from number matching, and click **Save**.
-
- :::image type="content" border="true" source="./media/how-to-mfa-number-match/number-match.png" alt-text="Screenshot of how to enable number matching.":::
-
-## Enable number matching using Graph APIs
-
-Identify your single target group for the schema configuration. Then use the following API endpoint to change the numberMatchingRequiredState property under featureSettings to **enabled**, and include or exclude groups:
-
-```
-https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
-```
-
->[!NOTE]
->In Graph Explorer, you'll need to consent to the **Policy.Read.All** and **Policy.ReadWrite.AuthenticationMethod** permissions.
+ >You can configure the NPS Server to support PAP. If PAP is not an option, you can set OVERRIDE_NUMBER_MATCHING_WITH_OTP = FALSE to fall back to **Approve**/**Deny** push notifications.
-
-### MicrosoftAuthenticatorAuthenticationMethodConfiguration properties
-
-**PROPERTIES**
-
-| Property | Type | Description |
-|||-|
-| id | String | The authentication method policy identifier. |
-| state | authenticationMethodState | Possible values are: **enabled**<br>**disabled** |
-
-**RELATIONSHIPS**
-
-| Relationship | Type | Description |
-|--||-|
-| includeTargets | [microsoftAuthenticatorAuthenticationMethodTarget](/graph/api/resources/passwordlessmicrosoftauthenticatorauthenticationmethodtarget) collection | A collection of users or groups who are enabled to use the authentication method |
-| featureSettings | [microsoftAuthenticatorFeatureSettings](/graph/api/resources/passwordlessmicrosoftauthenticatorauthenticationmethodtarget) collection | A collection of Microsoft Authenticator features. |
-
-### MicrosoftAuthenticator includeTarget properties
-
-**PROPERTIES**
-
-| Property | Type | Description |
-|-||-|
-| authenticationMode | String | Possible values are:<br>**any**: Both passwordless phone sign-in and traditional second factor notifications are allowed.<br>**deviceBasedPush**: Only passwordless phone sign-in notifications are allowed.<br>**push**: Only traditional second factor push notifications are allowed. |
-| id | String | Object ID of an Azure AD user or group. |
-| targetType | authenticationMethodTargetType | Possible values are: **user**, **group**.|
---
-### MicrosoftAuthenticator featureSettings properties
-
-**PROPERTIES**
-
-| Property | Type | Description |
-|-||-|
-| numberMatchingRequiredState | authenticationMethodFeatureConfiguration | Require number matching for MFA notifications. Value is ignored for phone sign-in notifications. |
-| displayAppInformationRequiredState | authenticationMethodFeatureConfiguration | Determines whether the user is shown application name in Microsoft Authenticator notification. |
-| displayLocationInformationRequiredState | authenticationMethodFeatureConfiguration | Determines whether the user is shown geographic location context in Microsoft Authenticator notification. |
-
-### Authentication method feature configuration properties
-
-**PROPERTIES**
-
-| Property | Type | Description |
-|-||-|
-| excludeTarget | featureTarget | A single entity that is excluded from this feature. <br>You can only exclude one group for number matching. |
-| includeTarget | featureTarget | A single entity that is included in this feature. <br>You can only include one group for number matching.|
-| State | advancedConfigState | Possible values are:<br>**enabled** explicitly enables the feature for the selected group.<br>**disabled** explicitly disables the feature for the selected group.<br>**default** allows Azure AD to manage whether the feature is enabled or not for the selected group. |
-
-### Feature target properties
-
-**PROPERTIES**
-
-| Property | Type | Description |
-|-||-|
-| id | String | ID of the entity targeted. |
-| targetType | featureTargetType | The kind of entity targeted, such as group, role, or administrative unit. The possible values are: 'group', 'administrativeUnit', 'role', 'unknownFutureValue'. |
-
->[!NOTE]
->Number matching can be enabled only for a single group.
-
-### Example of how to enable number matching for all users
-
-In **featureSettings**, you'll need to change the **numberMatchingRequiredState** from **default** to **enabled**.
-
-The value of Authentication Mode can be either **any** or **push**, depending on whether or not you also want to enable passwordless phone sign-in. In these examples, we'll use **any**, but if you don't want to allow passwordless, use **push**.
-
->[!NOTE]
->For passwordless users, enabling or disabling number matching has no impact because it's already part of the passwordless experience.
-
-You might need to patch the entire schema to prevent overwriting any previous configuration. In that case, do a GET first, update only the relevant fields, and then PATCH. The following example only shows the update to the **numberMatchingRequiredState** under **featureSettings**.
-
-Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the number match requirement. Users who aren't enabled for Microsoft Authenticator won't see the feature.
-
-```json
-//Retrieve your existing policy via a GET.
-//Leverage the Response body to create the Request body section. Then update the Request body similar to the Request body as shown below.
-//Change the Query to PATCH and Run query
-
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "featureSettings": {
- "numberMatchingRequiredState": {
- "state": "enabled",
- "includeTarget": {
- "targetType": "group",
- "id": "all_users"
- },
- "excludeTarget": {
- "targetType": "group",
- "id": "00000000-0000-0000-0000-000000000000"
- }
- }
- },
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "all_users",
- "isRegistrationRequired": false,
- "authenticationMode": "any",
- }
- ]
-}
-
-```
-
-To confirm the change is applied, run the GET request by using the following endpoint:
-
-```http
-GET https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
-```
-
-### Example of how to enable number matching for a single group
-
-In **featureSettings**, you'll need to change the **numberMatchingRequiredState** value from **default** to **enabled.**
-Inside the **includeTarget**, you'll need to change the **id** from **all_users** to the ObjectID of the group from the Azure portal.
-To remove an excluded group from number matching, change the **id** of the **excludeTarget** to `00000000-0000-0000-0000-000000000000`.
-
-You need to PATCH the entire configuration to prevent overwriting any previous configuration. We recommend that you do a GET first, and then update only the relevant fields and then PATCH. The example below only shows the update to the **numberMatchingRequiredState**.
-
-Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the number match requirement. Users who aren't enabled for Microsoft Authenticator won't see the feature.
-
-```json
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "featureSettings": {
- "numberMatchingRequiredState": {
- "state": "enabled",
- "includeTarget": {
- "targetType": "group",
- "id": "1ca44590-e896-4dbe-98ed-b140b1e7a53a"
- },
- "excludeTarget": {
- "targetType": "group",
- "id": "00000000-0000-0000-0000-000000000000"
- }
- }
- },
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "all_users",
- "isRegistrationRequired": false,
- "authenticationMode": "any"
- }
- ]
-}
-```
-
-To verify, run GET again and verify the ObjectID:
-
-```http
-GET https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
-```
+If your organization uses Remote Desktop Gateway and the user is registered for a TOTP code along with Authenticator push notifications, the user can't meet the Azure AD MFA challenge and Remote Desktop Gateway sign-in fails. In this case, you can set OVERRIDE_NUMBER_MATCHING_WITH_OTP = FALSE to fall back to **Approve**/**Deny** push notifications with Authenticator.
## FAQs
-### When will my tenant see number matching if I don't use the Azure portal or Graph API to roll out the change?
+### Can I opt out of number matching?
-Number match will be enabled for all users of Microsoft Authenticator push notifications after May 8, 2023. We had previously announced that we will remove the admin controls and enforce the number match experience tenant-wide for all users of Microsoft Authenticator push notifications starting February 27, 2023. After listening to customers, we will extend the availability of the rollout controls for a few more weeks.
+No, users can't opt out of number matching in Authenticator push notifications.
-Relevant services will begin deploying these changes after May 8, 2023 and users will start to see number match in approval requests. As services deploy, some may see number match while others don't. To ensure consistent behavior for all your users, we highly recommend you use the Azure portal or Graph API to roll out number match for all Microsoft Authenticator users.
+Relevant services will begin deploying these changes after May 8, 2023 and users will start to see number match in approval requests. As services deploy, some may see number match while others don't. To ensure consistent behavior for all users, we highly recommend you enable number match for Authenticator push notifications in advance.
-### What happens to number matching settings that are currently configured for a group in the Authentication methods policy after number matching is enabled for Authenticator push notifications after May 8th, 2023?
+### Does number matching only apply if Authenticator is set as the default authentication method?
-When Microsoft begins protecting all organizations by enabling number matching after May 8th, 2023, administrators will see the **Require number matching for push notifications** setting on the **Configure** tab of the Microsoft Authenticator policy is set to **Enabled** for **All users** and can't be disabled. In addition, the **Exclude** option for this setting will be removed.
+If the user has a different default authentication method, there's no change to their default sign-in. If the default method is Authenticator, they get number matching.
-### What happens for users who aren't specified in the Authentication methods policy but they are enabled for Notifications through mobile app in the legacy MFA tenant-wide policy?
+Regardless of their default method, any user who is prompted to sign in with Authenticator push notifications sees number matching. If prompted for another method, they won't see any change.
-Users who are enabled for MFA push notifications in the legacy MFA policy will also see number match after May 8th, 2023. If the legacy MFA policy has enabled **Notifications through mobile app**, users will see number matching regardless of whether or not it's enabled on the **Enable and Target** tab for Microsoft Authenticator in the Authentication methods policy.
+### What happens for users who aren't specified in the Authentication methods policy but they are enabled for Notifications through mobile app in the legacy MFA tenant-wide policy?
+Users who are enabled for MFA push notifications in the legacy MFA policy will also see number match if the legacy MFA policy has enabled **Notifications through mobile app**. Users will see number matching regardless of whether they are enabled for Authenticator in the Authentication methods policy.
:::image type="content" border="true" source="./media/how-to-mfa-number-match/notifications-through-mobile-app.png" alt-text="Screenshot of Notifications through mobile app setting.":::
-### How should users be prepared for default number matching?
+### Why does the portal still show the control to enable number matching?
-Here are differences in sign-in scenarios that Microsoft Authenticator users will see after number matching is enabled by default:
-- Authentication flows will require users to do number match when using Microsoft Authenticator. If their version of Microsoft Authenticator doesn't support number match, their authentication will fail.-- Self-service password reset (SSPR) and combined registration will also require number match when using Microsoft Authenticator. -- AD FS adapter will require number matching on [supported versions of Windows Server](#ad-fs-adapter). On earlier versions, users will continue to see the **Approve**/**Deny** experience and won't see number matching until you upgrade. -- NPS extension versions beginning 1.2.2131.2 will require users to do number matching. Because the NPS extension can't show a number, the user will be asked to enter a TOTP. The user must have a TOTP authentication method such as Microsoft Authenticator or software OATH tokens registered to see this behavior. If the user doesn't have a TOTP method registered, they'll continue to get the **Approve**/**Deny** experience.
-
- To create a registry entry that overrides this behavior and prompts users with **Approve**/**Deny**:
-
- 1. On the NPS Server, open the Registry Editor.
- 1. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureMfa.
- 1. Create the following String/Value:
- - Name: OVERRIDE_NUMBER_MATCHING_WITH_OTP
- - Value = FALSE
- 1. Restart the NPS Service.
--- Apple Watch will remain unsupported for number matching. We recommend you uninstall the Microsoft Authenticator Apple Watch app because you have to approve notifications on your phone.-
-### How can users enter a TOTP with the NPS extension?
-
-The VPN and NPS server must be using PAP protocol for TOTP prompts to appear. If they're using a protocol that doesn't support TOTP, such as MSCHAPv2, they'll continue to see the **Approve/Deny** notifications.
-
-### Will users get a prompt similar to a number matching prompt, but will need to enter a TOTP?
-
-They'll see a prompt to supply a verification code. They must select their account in Microsoft Authenticator and enter the random generated code that appears there.
-
-### Can I opt out of number matching?
-
-Yes, currently you can disable number matching. We highly recommend that you enable number matching for all users in your tenant to protect yourself from MFA fatigue attacks. To protect the ecosystem and mitigate these threats, Microsoft will enable number matching for all tenants starting May 8, 2023. After protection is enabled by default, users can't opt out of number matching in Microsoft Authenticator push notifications.
-
-Relevant services will begin deploying these changes after May 8, 2023 and users will start to see number match in approval requests. As services deploy, some may see number match while others don't. To ensure consistent behavior for all users, we highly recommend you enable number match for Microsoft Authenticator push notifications in advance.
-
-### Does number matching only apply if Microsoft Authenticator is set as the default authentication method?
-
-If the user has a different default authentication method, there won't be any change to their default sign-in. If the default method is Microsoft Authenticator and the user is specified in either of the following policies, they'll start to receive number matching approval after May 8th, 2023:
--- Authentication methods policy (in the portal, click **Security** > **Authentication methods** > **Policies**)-- Legacy MFA tenant-wide policy (in the portal, click **Security** > **Multifactor Authentication** > **Additional cloud-based multifactor authentication settings**)-
-Regardless of their default method, any user who is prompted to sign-in with Authenticator push notifications will see number match after May 8th, 2023. If the user is prompted for another method, they won't see any change.
+You might need to refresh the browser to update the portal after number matching is enabled by default beginning May 8, 2023.
### Is number matching supported with MFA Server? No, number matching isn't enforced because it's not a supported feature for MFA Server, which is [deprecated](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454).
-### What happens if a user runs an older version of Microsoft Authenticator?
-
-If a user is running an older version of Microsoft Authenticator that doesn't support number matching, authentication won't work if number matching is enabled. Users need to upgrade to the latest version of Microsoft Authenticator to use it for sign-in.
-
-### Why is my user prompted to tap on one of three numbers rather than enter the number in their Microsoft Authenticator app?
+### What happens if a user runs an older version of Authenticator?
-Older versions of Microsoft Authenticator prompt users to tap and select a number rather than enter the number in Microsoft Authenticator. These authentications won't fail, but Microsoft highly recommends that users upgrade to the latest version of Microsoft Authenticator.
+If a user is running an older version of Authenticator that doesn't support number matching, authentication won't work. Users need to upgrade to the latest version of Authenticator to use it for sign-in.
### How can users recheck the number on mobile iOS devices after the match request appears? During mobile iOS broker flows, the number match request appears over the number after a two-second delay. To recheck the number, click **Show me the number again**. This action only occurs in mobile iOS broker flows.
+### Is Apple Watch supported for Authenticator?
+
+In the Authenticator release in January 2023 for iOS, there is no companion app for watchOS because watchOS is incompatible with Authenticator security features. You can't install or use Authenticator on Apple Watch. We therefore recommend that you [delete Authenticator from your Apple Watch](https://support.apple.com/HT212064), and sign in with Authenticator on another device.
+ ## Next steps [Authentication methods in Azure Active Directory](concept-authentication-authenticator-app.md)
active-directory Msal Error Handling Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-python.md
In MSAL for Python, exceptions are rare because most errors are handled by retur
[!INCLUDE [Active directory error handling claims challenges](../../../includes/active-directory-develop-error-handling-claims-challenges.md)] +
+## Retrying after errors and exceptions
+
+MSAL makes HTTP calls to the Azure AD service, and occasionally failures can occur.
+For example, the network can go down or the server can be overloaded.
+
+MSAL Python 1.11+ automatically performs one retry attempt for you.
+You can customize this behavior by following
+[these instructions](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.params.http_client).
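For instance, here's a minimal sketch (assuming `requests` and a recent MSAL Python version) that supplies a custom `http_client` so your own retry policy applies instead of the default single retry. The client ID, authority, and retry settings are placeholders, not recommendations.

```python
import msal
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Build a session with your own retry policy (values here are illustrative).
session = requests.Session()
retry_policy = Retry(total=3, backoff_factor=0.5, status_forcelist=[502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retry_policy))

# Passing http_client makes MSAL use this session for its HTTP calls,
# so the retry behavior above replaces the built-in single retry.
app = msal.PublicClientApplication(
    "your-client-id",  # placeholder application (client) ID
    authority="https://login.microsoftonline.com/common",
    http_client=session,
)
```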
+
+### HTTP 429
+
+When the security token service (STS) is overloaded with too many requests,
+it returns HTTP error 429 with a hint about how long until you can try again in the `Retry-After` response header.
+
+Your app is expected to throttle subsequent requests and retry only after the specified period, which isn't an easy task to implement on your own.
+
+MSAL Python 1.16+ makes this easier: your app can retry at any time
+(say, whenever the end user clicks the sign-in button again),
+and MSAL Python automatically throttles those retry attempts by returning the same error response from an HTTP cache.
+It sends a real HTTP call only when the retry happens after the specified period.
+
+By default, this throttle mechanism works by saving throttle information into a built-in in-memory HTTP cache.
+You may provide your own `dict`-like object as the HTTP cache and control how its content is persisted.
+See [MSAL Python's API document](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.params.http_cache)
+for more details.
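As an illustration of that hook, here's a minimal sketch, assuming a local pickle file is an acceptable place to persist the cache between runs; the client ID and file name are placeholders.

```python
import atexit
import os
import pickle

import msal

CACHE_FILE = "msal_http_cache.bin"  # placeholder file name

# Load a previously persisted HTTP cache, or start with an empty dict.
if os.path.exists(CACHE_FILE):
    with open(CACHE_FILE, "rb") as cache_file:
        http_cache = pickle.load(cache_file)
else:
    http_cache = {}

# Persist the cache (including throttle information) when the app exits.
atexit.register(lambda: pickle.dump(http_cache, open(CACHE_FILE, "wb")))

app = msal.PublicClientApplication(
    "your-client-id",  # placeholder application (client) ID
    authority="https://login.microsoftonline.com/common",
    http_cache=http_cache,  # MSAL reads and writes throttle data here
)
```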
## Next steps
active-directory Scenario Desktop Acquire Token Interactive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-interactive.md
Title: Acquire a token to call a web API interactively (desktop app)
-description: Learn how to build a desktop app that calls web APIs to acquire a token for the app interactively
+description: Learn how to build a desktop app that calls web APIs to acquire a token for the app interactively.
Last updated 08/25/2021
-#Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
+#Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform for developers.
# Desktop app that calls web APIs: Acquire a token interactively
The following example shows minimal code to get a token interactively for readin
# [.NET](#tab/dotnet)
-### In MSAL.NET
+### Code in MSAL.NET
```csharp string[] scopes = new string[] {"user.read"};
catch(MsalUiRequiredException)
### Mandatory parameters
-`AcquireTokenInteractive` has only one mandatory parameter, ``scopes``, which contains an enumeration of strings that define the scopes for which a token is required. If the token is for Microsoft Graph, the required scopes can be found in the API reference of each Microsoft Graph API in the section named "Permissions." For instance, to [list the user's contacts](/graph/api/user-list-contacts), the scope "User.Read", "Contacts.Read" must be used. For more information, see [Microsoft Graph permissions reference](/graph/permissions-reference).
+`AcquireTokenInteractive` has only one mandatory parameter, `scopes`. It contains an enumeration of strings that define the scopes for which a token is required. If the token is for Microsoft Graph, you can find the required scopes in the API reference of each Microsoft Graph API in the section named "Permissions." For instance, to [list the user's contacts](/graph/api/user-list-contacts), you must use both `User.Read` and `Contacts.Read` as the scope. For more information, see [Microsoft Graph permissions reference](/graph/permissions-reference).
-On Android, you also need to specify the parent activity by using `.WithParentActivityOrWindow`, as shown, so that the token gets back to that parent activity after the interaction. If you don't specify it, an exception is thrown when calling `.ExecuteAsync()`.
+On both desktop and mobile applications, it's important to specify the parent by using `.WithParentActivityOrWindow`. In many cases, it's a requirement, and MSAL throws an exception if the parent isn't specified.
-### Specific optional parameters in MSAL.NET
+For desktop applications, see [Parent window handles](/azure/active-directory/develop/scenario-desktop-acquire-token-wam#parent-window-handles).
+
+For mobile applications, provide `Activity` (Android) or `UIViewController` (iOS).
+
+### Optional parameters in MSAL.NET
#### WithParentActivityOrWindow
-The UI is important because it's interactive. `AcquireTokenInteractive` has one specific optional parameter that can specify, for platforms that support it, the parent UI. When used in a desktop application, `.WithParentActivityOrWindow` has a different type, which depends on the platform. Alternatively you can omit the optional parent window parameter to create a window, if you do not want to control where the sign-in dialog appears on the screen. This would be applicable for applications which are command line based, used to pass calls to any other backend service and do not need any windows for user interaction.
+The UI is important because it's interactive. `AcquireTokenInteractive` has one specific optional parameter that can specify (for platforms that support it) the parent UI. When you use `.WithParentActivityOrWindow` in a desktop application, it has a different type that depends on the platform.
+
+Alternatively, you can omit the optional parent window parameter to create a window, if you don't want to control where the sign-in dialog appears on the screen. This option is applicable for applications that are based on a command line, are used to pass calls to any other back-end service, and don't need any windows for user interaction.
```csharp // net45
WithParentActivityOrWindow(object parent).
Remarks: -- On .NET Standard, the expected `object` is `Activity` on Android, `UIViewController` on iOS, `NSWindow` on Mac, and `IWin32Window` or `IntPr` on Windows.-- On Windows, you must call `AcquireTokenInteractive` from the UI thread so that the embedded browser gets the appropriate UI synchronization context. Not calling from the UI thread might cause messages to not pump properly and deadlock scenarios with the UI. One way of calling Microsoft Authentication Libraries (MSALs) from the UI thread if you aren't on the UI thread already is to use the `Dispatcher` on WPF.
+- On .NET Standard, the expected `object` value is `Activity` on Android, `UIViewController` on iOS, `NSWindow` on Mac, and `IWin32Window` or `IntPtr` on Windows.
+- On Windows, you must call `AcquireTokenInteractive` from the UI thread so that the embedded browser gets the appropriate UI synchronization context. Not calling from the UI thread might cause messages to not pump properly and cause deadlock scenarios with the UI. One way of calling the Microsoft Authentication Library (MSAL) from the UI thread if you aren't on the UI thread already is to use `Dispatcher` on Windows Presentation Foundation (WPF).
- If you're using WPF, to get a window from a WPF control, you can use the `WindowInteropHelper.Handle` class. Then the call is from a WPF control (`this`): ```csharp
Remarks:
#### WithPrompt
-`WithPrompt()` is used to control the interactivity with the user by specifying a prompt. The exact behavior can be controlled by using the [Microsoft.Identity.Client.Prompt](/dotnet/api/microsoft.identity.client.prompt) structure.
+You use `WithPrompt()` to control the interactivity with the user by specifying a prompt. You can control the exact behavior by using the [Microsoft.Identity.Client.Prompt](/dotnet/api/microsoft.identity.client.prompt) structure.
+
+The structure defines the following constants:
-The struct defines the following constants:
+- `SelectAccount` forces the security token service (STS) to present the account selection dialog that contains accounts for which the user has a session. This option is the default. It's useful when you want to let users choose among different identities.
-- `SelectAccount` forces the STS to present the account selection dialog box that contains accounts for which the user has a session. This option is useful when application developers want to let users choose among different identities. This option drives MSAL to send `prompt=select_account` to the identity provider. This option is the default. It does a good job of providing the best possible experience based on the available information, such as account and presence of a session for the user. Don't change it unless you have good reason to do it.-- `Consent` enables the application developer to force the user to be prompted for consent, even if consent was granted before. In this case, MSAL sends `prompt=consent` to the identity provider. This option can be used in some security-focused applications where the organization governance demands that the user is presented with the consent dialog box each time the application is used.-- `ForceLogin` enables the application developer to have the user prompted for credentials by the service, even if this user prompt might not be needed. This option can be useful to let the user sign in again if acquiring a token fails. In this case, MSAL sends `prompt=login` to the identity provider. Sometimes it's used in security-focused applications where the organization governance demands that the user re-signs in each time they access specific parts of an application.-- `Create` triggers a sign-up experience, which is used for External Identities, by sending `prompt=create` to the identity provider. This prompt should not be sent for Azure AD B2C apps. For more information, see [Add a self-service sign-up user flow to an app](../external-identities/self-service-sign-up-user-flow.md).-- `Never` (for .NET 4.5 and WinRT only) won't prompt the user, but instead tries to use the cookie stored in the hidden embedded web view. For more information, see web views in MSAL.NET. Using this option might fail. In that case, `AcquireTokenInteractive` throws an exception to notify that a UI interaction is needed. You'll need to use another `Prompt` parameter.-- `NoPrompt` won't send any prompt to the identity provider which therefore will decide to present the best sign-in experience to the user (single-sign-on, or select account). This option is also mandatory for Azure Active Directory (Azure AD) B2C edit profile policies. For more information, see [Azure AD B2C specifics](https://aka.ms/msal-net-b2c-specificities).
+ This option drives MSAL to send `prompt=select_account` to the identity provider. It provides the best possible experience based on available information, such as the account and the presence of a session for the user. Don't change it unless you have a good reason.
+- `Consent` enables you to force the user to be prompted for consent, even if the application granted consent before. In this case, MSAL sends `prompt=consent` to the identity provider. You can use this option in some security-focused applications where the organization's governance demands that the consent dialog box appears each time the user opens the application.
+- `ForceLogin` enables you to have the application prompt the user for credentials, even if this user prompt might not be needed. This option can be useful to let the user sign in again if token acquisition fails. In this case, MSAL sends `prompt=login` to the identity provider. Organizations sometimes use this option in security-focused applications where governance demands that users sign in each time they access specific parts of an application.
+- `Create` triggers a sign-up experience for external identities by sending `prompt=create` to the identity provider. Azure Active Directory B2C (Azure AD B2C) apps shouldn't send this prompt. For more information, see [Add a self-service sign-up user flow to an app](../external-identities/self-service-sign-up-user-flow.md).
+- `Never` (for .NET 4.5 and Windows Runtime only) doesn't prompt the user. Instead, it tries to use the cookie stored in the hidden embedded web view.
+
+ Use of this option might fail. In that case, `AcquireTokenInteractive` throws an exception to notify you that you need a UI interaction. Then, use another `Prompt` parameter.
+- `NoPrompt` doesn't send any prompt to the identity provider. The identity provider decides which sign-in experience is best for the user (single sign-on or select account).
+
+ This option is mandatory for editing profile policies in Azure AD B2C. For more information, see [Azure AD B2C specifics](https://aka.ms/msal-net-b2c-specificities).
#### WithUseEmbeddedWebView
var result = await app.AcquireTokenInteractive(scopes)
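For example, opting into the embedded web view (where the platform supports it) might look like the following sketch, again assuming `app` and `scopes` are defined as in the earlier examples:

```csharp
// Prefer the embedded web view over the system browser.
var result = await app.AcquireTokenInteractive(scopes)
    .WithUseEmbeddedWebView(true)
    .ExecuteAsync();
```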
#### WithExtraScopeToConsent
-This modifier is used in an advanced scenario where you want the user to pre-consent to several resources upfront, and you don't want to use incremental consent, which is normally used with MSAL.NET/the Microsoft identity platform. For more information, see [Have the user consent upfront for several resources](scenario-desktop-production.md#have-the-user-consent-upfront-for-several-resources).
+This modifier is for advanced scenarios where you want the user to consent to several resources up front and you don't want to use incremental consent. Developers normally use incremental consent with MSAL.NET and the Microsoft identity platform. For more information, see [Have the user consent up front for several resources](scenario-desktop-production.md#have-the-user-consent-upfront-for-several-resources).
```csharp var result = await app.AcquireTokenInteractive(scopesForCustomerApi)
var result = await app.AcquireTokenInteractive(scopesForCustomerApi)
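One possible shape of such a call is sketched below; `scopesForCustomerApi` and `scopesForVendorApi` are placeholder variable names for the two sets of scopes.

```csharp
// Acquire a token for one resource while asking the user to pre-consent to another.
var result = await app.AcquireTokenInteractive(scopesForCustomerApi)
    .WithExtraScopeToConsent(scopesForVendorApi)
    .ExecuteAsync();
```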
#### WithCustomWebUi

A web UI is a mechanism to invoke a browser. This mechanism can be a dedicated UI WebBrowser control or a way to delegate opening the browser.
-MSAL provides web UI implementations for most platforms, but there are cases where you might want to host the browser yourself:
+MSAL provides web UI implementations for most platforms, but you might want to host the browser yourself in these cases:
-- Platforms that aren't explicitly covered by MSAL, for example, Blazor, Unity, and Mono on desktops.
+- You have platforms that MSAL doesn't explicitly cover, like Blazor, Unity, and Mono on desktops.
- You want to UI test your application and use an automated browser that can be used with Selenium.
- The browser and the app that run MSAL are in separate processes.
-##### At a glance
-
-To achieve this, you give to MSAL `start Url`, which needs to be displayed in a browser of choice so that the end user can enter items such as their username.
-After authentication finishes, your app needs to pass back to MSAL `end Url`, which contains a code provided by Azure AD.
-The host of `end Url` is always `redirectUri`. To intercept `end Url`, do one of the following things:
+To achieve this, you give MSAL `start Url`, which needs to be displayed in a browser so that users can enter items such as their username. After authentication finishes, your app needs to pass `end Url` back to MSAL. `end Url` contains a code that Azure AD provides. The host of `end Url` is always `redirectUri`. To intercept `end Url`, do one of the following things:
- Monitor browser redirects until `redirect Url` is hit.
-- Have the browser redirect to a URL, which you monitor.
+- Have the browser redirect to a URL that you monitor.
-##### WithCustomWebUi is an extensibility point
+`WithCustomWebUi` is an extensibility point that you can use to provide your own UI in public client applications. You can also let users go through the `/Authorize` endpoint of the identity provider and let them sign in and consent. MSAL.NET can then redeem the authentication code and get a token.
-`WithCustomWebUi` is an extensibility point that you can use to provide your own UI in public client applications. You can also let the user go through the /Authorize endpoint of the identity provider and let them sign in and consent. MSAL.NET can then redeem the authentication code and get a token. For example, it's used in Visual Studio to have electrons applications (for instance, Visual Studio Feedback) provide the web interaction, but leave it to MSAL.NET to do most of the work. You can also use it if you want to provide UI automation. In public client applications, MSAL.NET uses the Proof Key for Code Exchange (PKCE) standard to ensure that security is respected. Only MSAL.NET can redeem the code. For more information, see [RFC 7636 - Proof Key for Code Exchange by OAuth Public Clients](https://tools.ietf.org/html/rfc7636).
+For example, you can use `WithCustomWebUi` in Visual Studio to have Electron applications (for instance, Visual Studio Feedback) provide the web interaction, but leave it to MSAL.NET to do most of the work. You can also use `WithCustomWebUi` if you want to provide UI automation.
- ```csharp
- using Microsoft.Identity.Client.Extensions;
- ```
+In public client applications, MSAL.NET uses the Proof Key for Code Exchange (PKCE) standard to ensure that security is respected. Only MSAL.NET can redeem the code. For more information, see [RFC 7636 - Proof Key for Code Exchange by OAuth Public Clients](https://tools.ietf.org/html/rfc7636).
-##### Use WithCustomWebUi
+```csharp
+using Microsoft.Identity.Client.Extensions;
+```
-To use `.WithCustomWebUI`, follow these steps.
+##### Use WithCustomWebUI
- 1. Implement the `ICustomWebUi` interface. For more information, see [this website](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/blob/053a98d16596be7e9ca1ab916924e5736e341fe8/src/Microsoft.Identity.Client/Extensibility/ICustomWebUI.cs#L32-L70). Implement one `AcquireAuthorizationCodeAsync`method and accept the authorization code URL computed by MSAL.NET. Then let the user go through the interaction with the identity provider and return back the URL by which the identity provider would have called your implementation back along with the authorization code. If you have issues, your implementation should throw a `MsalExtensionException` exception to nicely cooperate with MSAL.
- 2. In your `AcquireTokenInteractive` call, use the `.WithCustomUI()` modifier passing the instance of your custom web UI.
+To use `WithCustomWebUI`, follow these steps:
- ```csharp
- result = await app.AcquireTokenInteractive(scopes)
- .WithCustomWebUi(yourCustomWebUI)
- .ExecuteAsync();
- ```
+1. Implement the `ICustomWebUi` interface (a skeleton sketch follows these steps). For more information, see [this GitHub page](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/blob/053a98d16596be7e9ca1ab916924e5736e341fe8/src/Microsoft.Identity.Client/Extensibility/ICustomWebUI.cs#L32-L70).
+1. Implement one `AcquireAuthorizationCodeAsync` method and accept the authorization code URL that MSAL.NET computes.
+1. Let the user go through the interaction with the identity provider and return the URL that the identity provider used to call back your implementation, along with the authorization code. If you have problems, your implementation should throw an `MsalExtensionException` exception to cooperate with MSAL.
+1. In your `AcquireTokenInteractive` call, use the `.WithCustomUI()` modifier by passing the instance of your custom web UI:
-##### Examples of implementation of ICustomWebUi in test automation: SeleniumWebUI
+ ```csharp
+ result = await app.AcquireTokenInteractive(scopes)
+ .WithCustomWebUi(yourCustomWebUI)
+ .ExecuteAsync();
+ ```
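A minimal skeleton of such an implementation might look like the following sketch. The class name and the browser-hosting helper are placeholders, and the exact member signature should be checked against the `ICustomWebUi` source linked in step 1.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Identity.Client.Extensibility;

// Hypothetical skeleton; verify the member signature against the linked ICustomWebUi definition.
public class MyCustomWebUi : ICustomWebUi
{
    public async Task<Uri> AcquireAuthorizationCodeAsync(
        Uri authorizationUri,          // URL computed by MSAL.NET to display in your own browser
        Uri redirectUri,               // redirect URI to watch for
        CancellationToken cancellationToken)
    {
        // Open 'authorizationUri' in the browser you host, let the user sign in and consent,
        // and monitor navigation until the browser lands on 'redirectUri' with the code attached.
        Uri resultUri = await LaunchBrowserAndWaitForRedirectAsync(
            authorizationUri, redirectUri, cancellationToken);

        // Return the full redirect URI (including the authorization code) so MSAL.NET can redeem it.
        // On failure, throw MsalExtensionException so MSAL can surface the error cleanly.
        return resultUri;
    }

    private Task<Uri> LaunchBrowserAndWaitForRedirectAsync(
        Uri authorizationUri, Uri redirectUri, CancellationToken cancellationToken)
    {
        // Placeholder for your own browser-hosting and redirect-monitoring logic.
        throw new NotImplementedException();
    }
}
```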
-The MSAL.NET team has rewritten the UI tests to use this extensibility mechanism. If you're interested, look at the [SeleniumWebUI](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/blob/053a98d16596be7e9ca1ab916924e5736e341fe8/tests/Microsoft.Identity.Test.Integration/Infrastructure/SeleniumWebUI.cs#L15-L160) class in the MSAL.NET source code.
+The MSAL.NET team has rewritten the UI tests to use this extensibility mechanism. If you're interested, view the [SeleniumWebUI](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/blob/053a98d16596be7e9ca1ab916924e5736e341fe8/tests/Microsoft.Identity.Test.Integration/Infrastructure/SeleniumWebUI.cs#L15-L160) class in the MSAL.NET source code.
##### Provide a great experience with SystemWebViewOptions

From MSAL.NET 4.1 [`SystemWebViewOptions`](/dotnet/api/microsoft.identity.client.systemwebviewoptions), you can specify:

-- The URI to go to (`BrowserRedirectError`) or the HTML fragment to display (`HtmlMessageError`) in case of sign-in or consent errors in the system web browser.
-- The URI to go to (`BrowserRedirectSuccess`) or the HTML fragment to display (`HtmlMessageSuccess`) in case of successful sign-in or consent.
-- The action to run to start the system browser. You can provide your own implementation by setting the `OpenBrowserAsync` delegate. The class also provides a default implementation for two browsers: `OpenWithEdgeBrowserAsync` and `OpenWithChromeEdgeBrowserAsync` for Microsoft Edge and [Microsoft Edge on Chromium](https://www.windowscentral.com/faq-edge-chromium), respectively.
+- The URI to go to (`BrowserRedirectError`) or the HTML fragment to display (`HtmlMessageError`) if sign-in or consent errors appear in the system web browser.
+- The URI to go to (`BrowserRedirectSuccess`) or the HTML fragment to display (`HtmlMessageSuccess`) if sign-in or consent is successful.
+- The action to run to start the system browser. You can provide your own implementation by setting the `OpenBrowserAsync` delegate. The class also provides a default implementation for two browsers: `OpenWithEdgeBrowserAsync` for Microsoft Edge and `OpenWithChromeEdgeBrowserAsync` for [Microsoft Edge on Chromium](https://www.windowscentral.com/faq-edge-chromium).
To use this structure, write something like the following example:
var result = app.AcquireTokenInteractive(scopes)
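As a hedged sketch, a fuller call might look like the following; the error message text and the success redirect URI are placeholders.

```csharp
// Customize what the user sees in the system browser after sign-in.
var options = new SystemWebViewOptions
{
    HtmlMessageError = "<p>An error occurred: {0}. Details: {1}</p>",   // shown on failure
    BrowserRedirectSuccess = new Uri("https://contoso.com/signed-in")   // opened on success
};

var result = await app.AcquireTokenInteractive(scopes)
    .WithUseEmbeddedWebView(false)            // use the system browser
    .WithSystemWebViewOptions(options)
    .ExecuteAsync();
```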
#### Other optional parameters
-To learn more about all the other optional parameters for `AcquireTokenInteractive`, see [AcquireTokenInteractiveParameterBuilder](/dotnet/api/microsoft.identity.client.acquiretokeninteractiveparameterbuilder#methods).
+To learn about the other optional parameters for `AcquireTokenInteractive`, see [AcquireTokenInteractiveParameterBuilder](/dotnet/api/microsoft.identity.client.acquiretokeninteractiveparameterbuilder#methods).
# [Java](#tab/java) ```java private static IAuthenticationResult acquireTokenInteractive() throws Exception {
- // Load token cache from file and initialize token cache aspect. The token cache will have
+ // Load the token cache from the file and initialize the token cache aspect. The token cache will have
// dummy data, so the acquireTokenSilently call will fail. TokenCacheAspect tokenCacheAspect = new TokenCacheAspect("sample_cache.json");
private static IAuthenticationResult acquireTokenInteractive() throws Exception
.build(); Set<IAccount> accountsInCache = pca.getAccounts().join();
- // Take first account in the cache. In a production application, you would filter
- // accountsInCache to get the right account for the user authenticating.
+ // Take the first account in the cache. In a production application, you would filter
+ // accountsInCache to get the right account for the user who is authenticating.
IAccount account = accountsInCache.iterator().next(); IAuthenticationResult result;
private static IAuthenticationResult acquireTokenInteractive() throws Exception
.builder(SCOPE, account) .build();
- // try to acquire token silently. This call will fail since the token cache
- // does not have any data for the user you are trying to acquire a token for
+ // try to acquire the token silently. This call will fail because the token cache
+ // does not have any data for the user you're trying to acquire a token for
result = pca.acquireTokenSilently(silentParameters).join(); } catch (Exception ex) { if (ex.getCause() instanceof MsalException) {
private static IAuthenticationResult acquireTokenInteractive() throws Exception
.scopes(SCOPE) .build();
- // Try to acquire a token interactively with system browser. If successful, you should see
- // the token and account information printed out to console
+ // Try to acquire a token interactively with the system browser. If successful, you should see
+ // the token and account information printed out to the console
result = pca.acquireToken(parameters).join(); } else { // Handle other exceptions accordingly
private static IAuthenticationResult acquireTokenInteractive() throws Exception
# [macOS](#tab/macOS)
-### In MSAL for iOS and macOS
-
-Objective-C:
+### Code in MSAL for iOS and macOS
```objc MSALInteractiveTokenParameters *interactiveParams = [[MSALInteractiveTokenParameters alloc] initWithScopes:scopes webviewParameters:[MSALWebviewParameters new]];
MSALInteractiveTokenParameters *interactiveParams = [[MSALInteractiveTokenParame
}]; ```
-Swift:
- ```swift let interactiveParameters = MSALInteractiveTokenParameters(scopes: scopes, webviewParameters: MSALWebviewParameters()) application.acquireToken(with: interactiveParameters, completionBlock: { (result, error) in
application.acquireToken(with: interactiveParameters, completionBlock: { (result
return }
- // Get access token from result
+ // Get the access token from the result
    let accessToken = authResult.accessToken
})
```

# [Node.js](#tab/nodejs)
-In MSAL Node, you acquire tokens via authorization code flow with Proof Key for Code Exchange (PKCE). The process has two steps: first, the application obtains a URL that can be used to generate an authorization code. This URL can be opened in a browser of choice, where the user can input their credentials, and will be redirected back to the `redirectUri` (registered during the app registration) with an authorization code. Second, the application passes the authorization code received to the `acquireTokenByCode()` method which exchanges it for an access token.
+In MSAL Node, you acquire tokens via authorization code flow with Proof Key for Code Exchange (PKCE). The process has two steps:
+
+1. The application obtains a URL that can be used to generate an authorization code. Users can open the URL in a browser and enter their credentials. They're then redirected back to `redirectUri` (registered during the app registration) with an authorization code.
+1. The application passes the received authorization code to the `acquireTokenByCode()` method, which exchanges it for an access token.
```javascript const msal = require("@azure/msal-node");
const {verifier, challenge} = await msal.cryptoProvider.generatePkceCodes();
const authCodeUrlParameters = { scopes: ["User.Read"], redirectUri: "your_redirect_uri",
- codeChallenge: challenge, // PKCE Code Challenge
- codeChallengeMethod: "S256" // PKCE Code Challenge Method
+ codeChallenge: challenge, // PKCE code challenge
+ codeChallengeMethod: "S256" // PKCE code challenge method
};
-// get url to sign user in and consent to scopes needed for application
+// Get the URL to sign in the user and consent to scopes needed for the application
pca.getAuthCodeUrl(authCodeUrlParameters).then((response) => { console.log(response); const tokenRequest = { code: response["authorization_code"],
- codeVerifier: verifier // PKCE Code Verifier
+ codeVerifier: verifier, // PKCE code verifier
redirectUri: "your_redirect_uri", scopes: ["User.Read"], };
- // acquire a token by exchanging the code
+ // Acquire a token by exchanging the code
pca.acquireTokenByCode(tokenRequest).then((response) => { console.log("\nResponse: \n:", response); }).catch((error) => {
pca.getAuthCodeUrl(authCodeUrlParameters).then((response) => {
# [Python](#tab/python)
-MSAL Python 1.7+ provides an interactive acquire token method.
+MSAL Python 1.7+ provides an interactive method for acquiring a token:
```python
result = None
-# Firstly, check the cache to see if this end user has signed in before
+# Check the cache to see if this user has signed in before
accounts = app.get_accounts(username=config["username"])
if accounts:
    result = app.acquire_token_silent(config["scope"], account=accounts[0])
active-directory Scenario Desktop Acquire Token Wam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-wam.md
Title: Acquire a token to call a web API using web account manager (desktop app)
-description: Learn how to build a desktop app that calls web APIs to acquire a token for the app using web account manager
+ Title: Acquire a token to call a web API by using Web Account Manager (desktop app)
+description: Learn how to build a desktop app that calls web APIs to acquire a token for the app by using Web Account Manager.
Last updated 12/14/2022
-#Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
+#Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform for developers.
-# Desktop app that calls web APIs: Acquire a token using WAM
+# Desktop app that calls web APIs: Acquire a token by using WAM
-MSAL is able to call Web Account Manager (WAM), a Windows 10+ component that ships with the OS. This component acts as an authentication broker and users of your app benefit from integration with accounts known from Windows, such as the account you signed-in with in your Windows session.
+The Microsoft Authentication Library (MSAL) calls Web Account Manager (WAM), a Windows 10+ component that acts as an authentication broker.
## WAM value proposition
-Using an authentication broker such as WAM has numerous benefits.
+Using an authentication broker such as WAM has numerous benefits:
-- Enhanced security. See [token protection](https://learn.microsoft.com/azure/active-directory/conditional-access/concept-token-protection)
-- Better support for Windows Hello, Conditional Access and FIDO keys
-- Integration with Windows' "Email and Accounts" view
-- Better Single Sign-On
-- Ability to sign in silently with the current Windows account
-- Most bug fixes and enhancements will be shipped with Windows
+- Enhanced security. See [Token protection](/azure/active-directory/conditional-access/concept-token-protection).
+- Support for Windows Hello, conditional access, and FIDO keys.
+- Integration with the Windows **Email & accounts** view.
+- Fast single sign-on.
+- Ability to sign in silently with the current Windows account.
+- Bug fixes and enhancements shipped with Windows.
## WAM limitations

-- Available on Windows 10 and later and on Windows Server 2019 and later. On Mac, Linux, and earlier versions of Windows, MSAL will automatically fall back to a browser.
-- B2C and ADFS authorities aren't supported. MSAL will fall back to a browser.
+- WAM is available on Windows 10 and later, and on Windows Server 2019 and later. On Mac, Linux, and earlier versions of Windows, MSAL automatically falls back to a browser.
+- Azure Active Directory B2C (Azure AD B2C) and Active Directory Federation Services (AD FS) authorities aren't supported. MSAL falls back to a browser.
## WAM integration package
-Most apps will need to reference `Microsoft.Identity.Client.Broker` package to use this integration. MAUI apps are not required to do this; the functionality is inside MSAL when the target is `net6-windows` and later.
+Most apps need to reference the `Microsoft.Identity.Client.Broker` package to use this integration. .NET MAUI apps don't have to do this, because the functionality is inside MSAL when the target is `net6-windows` and later.
## WAM calling pattern
-You can use the following pattern to use WAM.
+You can use the following pattern for WAM:
```csharp // 1. Configuration - read below about redirect URI
You can use the following pattern to use WAM.
.WithBroker(new BrokerOptions(BrokerOptions.OperatingSystems.Windows)) .Build();
- // Add a token cache, see https://learn.microsoft.com/azure/active-directory/develop/msal-net-token-cache-serialization?tabs=desktop
+ // Add a token cache; see https://learn.microsoft.com/azure/active-directory/develop/msal-net-token-cache-serialization?tabs=desktop
- // 2. Find a account for silent login
+ // 2. Find an account for silent login
- // is there an account in the cache?
+ // Is there an account in the cache?
IAccount accountToLogin = (await pca.GetAccountsAsync()).FirstOrDefault(); if (accountToLogin == null) {
- // 3. no account in the cache, try to login with the OS account
+ // 3. No account in the cache; try to log in with the OS account
accountToLogin = PublicClientApplication.OperatingSystemAccount; }
You can use the following pattern to use WAM.
var authResult = await pca.AcquireTokenSilent(new[] { "User.Read" }, accountToLogin) .ExecuteAsync(); }
- // cannot login silently - most likely AAD would like to show a consent dialog or the user needs to re-enter credentials
+ // Cannot log in silently - most likely Azure AD would show a consent dialog or the user needs to re-enter credentials
catch (MsalUiRequiredException) { // 5. Interactive authentication var authResult = await pca.AcquireTokenInteractive(new[] { "User.Read" }) .WithAccount(accountToLogin)
- // this is mandatory so that WAM is correctly parented to your app, read on for more guidance
+ // This is mandatory so that WAM is correctly parented to your app; read on for more guidance
.WithParentActivityOrWindow(myWindowHandle) .ExecuteAsync();
- // consider allowing the user to re-authenticate with a different account, by calling AcquireTokenInteractive again
+ // Consider allowing the user to re-authenticate with a different account, by calling AcquireTokenInteractive again
} ```
-If a broker isn't present (for example, Win8.1, Mac, or Linux), then MSAL will fall back to a browser, where redirect URI rules apply.
+If a broker isn't present (for example, Windows 8.1, Mac, or Linux), MSAL falls back to a browser, where redirect URI rules apply.
### Redirect URI
-WAM redirect URIs don't need to be configured in MSAL, but they must be configured in the app registration.
+You don't need to configure WAM redirect URIs in MSAL, but you do need to configure them in the app registration:
``` ms-appx-web://microsoft.aad.brokerplugin/{client_id}
ms-appx-web://microsoft.aad.brokerplugin/{client_id}
### Token cache persistence
-It's important to persist MSAL's token cache because MSAL continues to store id tokens and account metadata there. See https://learn.microsoft.com/azure/active-directory/develop/msal-net-token-cache-serialization?tabs=desktop
+It's important to persist the MSAL token cache because MSAL continues to store ID tokens and account metadata there. For more information, see [Token cache serialization in MSAL.NET](/azure/active-directory/develop/msal-net-token-cache-serialization?tabs=desktop).
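As one hedged sketch of file-based persistence, the `Microsoft.Identity.Client.Extensions.Msal` package can serialize the cache for you; the file name and location below are placeholders, and the exact builder overloads depend on the package version.

```csharp
using Microsoft.Identity.Client.Extensions.Msal;

// Describe where the cache file lives (names are placeholders).
var storageProperties = new StorageCreationPropertiesBuilder(
        "msal_token_cache.dat",
        MsalCacheHelper.UserRootDirectory)
    .Build();

// Bind the persistent cache to the application's user token cache.
var cacheHelper = await MsalCacheHelper.CreateAsync(storageProperties);
cacheHelper.RegisterCache(pca.UserTokenCache);
```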
-### Find a account for silent login
+### Account for silent login
-The recommended pattern is:
+To find an account for silent login, we recommend this pattern:
-1. If the user previously logged in, use that account.
-2. If not, use `PublicClientApplication.OperatingSystemAccount` which the current Windows Account
-3. Allow the end-user to change to a different account by logging in interactively.
+- If the user previously logged in, use that account. If not, use `PublicClientApplication.OperatingSystemAccount` for the current Windows account.
+- Allow the user to change to a different account by logging in interactively.
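A condensed sketch of that pattern follows, assuming `pca` is the `IPublicClientApplication` built with the broker as in the calling pattern above and `myWindowHandle` is a placeholder window handle:

```csharp
// Prefer a cached account; otherwise fall back to the current Windows account.
IAccount account = (await pca.GetAccountsAsync()).FirstOrDefault()
                   ?? PublicClientApplication.OperatingSystemAccount;

AuthenticationResult result;
try
{
    result = await pca.AcquireTokenSilent(new[] { "User.Read" }, account).ExecuteAsync();
}
catch (MsalUiRequiredException)
{
    // Silent sign-in failed; let the user sign in (possibly with a different account) interactively.
    result = await pca.AcquireTokenInteractive(new[] { "User.Read" })
        .WithAccount(account)
        .WithParentActivityOrWindow(myWindowHandle)   // placeholder window handle
        .ExecuteAsync();
}
```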
-## Parent Window Handles
+## Parent window handles
-It is required to configure MSAL with the window that the interactive experience should be parented to, using `WithParentActivityOrWindow` APIs.
+You must configure MSAL with the window that the interactive experience should be parented to, by using `WithParentActivityOrWindow` APIs.
### UI applications
-For UI apps like WinForms, WPF, WinUI3 see https://learn.microsoft.com/windows/apps/develop/ui-input/retrieve-hwnd
+
+For UI apps like Windows Forms (WinForms), Windows Presentation Foundation (WPF), or Windows UI Library version 3 (WinUI3), see [Retrieve a window handle](/windows/apps/develop/ui-input/retrieve-hwnd).
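For instance, in a Windows Forms app the current form's `Handle` can be passed directly; this hedged sketch assumes the call is made from inside a `Form` class and that `pca` and `scopes` are already defined:

```csharp
// Inside a Form: parent the WAM dialogs to this window.
var result = await pca.AcquireTokenInteractive(scopes)
    .WithParentActivityOrWindow(this.Handle)   // HWND of the current form
    .ExecuteAsync();
```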
### Console applications
-For console applications it is a bit more involved, because of the terminal window and its tabs. Use the following code:
+For console applications, the configuration is more involved because of the terminal window and its tabs. Use the following code:
```csharp enum GetAncestorFlags
enum GetAncestorFlags
/// <summary> /// Retrieves the handle to the ancestor of the specified window. /// </summary>
-/// <param name="hwnd">A handle to the window whose ancestor is to be retrieved.
+/// <param name="hwnd">A handle to the window whose ancestor will be retrieved.
/// If this parameter is the desktop window, the function returns NULL. </param> /// <param name="flags">The ancestor to be retrieved.</param> /// <returns>The return value is the handle to the ancestor window.</returns>
public IntPtr GetConsoleOrTerminalWindow()
### "WAM Account Picker did not return an account" error message
-This message indicates that either the application user closed the dialog that displays accounts, or the dialog itself crashed. A crash might occur if AccountsControl, a Windows control, is registered incorrectly in Windows. To resolve this issue:
+The "WAM Account Picker did not return an account" message indicates that either the application user closed the dialog that displays accounts, or the dialog itself crashed. A crash might occur if `AccountsControl`, a Windows control, is registered incorrectly in Windows. To resolve this problem:
-1. In the taskbar, right-click **Start**, and then select **Windows PowerShell (Admin)**.
-1. If you're prompted by a User Account Control (UAC) dialog, select **Yes** to start PowerShell.
+1. On the taskbar, right-click **Start**, and then select **Windows PowerShell (Admin)**.
+1. If you're prompted by a User Account Control dialog, select **Yes** to start PowerShell.
1. Copy and then run the following script:

   ```powershell
   if (-not (Get-AppxPackage Microsoft.AccountsControl)) { Add-AppxPackage -Register "$env:windir\SystemApps\Microsoft.AccountsControl_cw5n1h2txyewy\AppxManifest.xml" -DisableDevelopmentMode -ForceApplicationShutdown }
   Get-AppxPackage Microsoft.AccountsControl
   ```
-### "MsalClientException: ErrorCode: wam_runtime_init_failed" error message during Single-file deployment
-You may see the following error when packaging your application into a [single file bundle](/dotnet/core/deploying/single-file/overview).
+### "MsalClientException: ErrorCode: wam_runtime_init_failed" error message during a single file deployment
+
+You might see the following error when packaging your application into a [single file bundle](/dotnet/core/deploying/single-file/overview):
```
MsalClientException: wam_runtime_init_failed: The type initializer for 'Microsoft.Identity.Client.NativeInterop.API' threw an exception. See https://aka.ms/msal-net-wam#troubleshooting
```
-This error indicates that the native binaries from the [Microsoft.Identity.Client.NativeInterop](https://www.nuget.org/packages/Microsoft.Identity.Client.NativeInterop/) were not packaged into the single file bundle. To embed those files for extraction and get one output file, set the property IncludeNativeLibrariesForSelfExtract to true. Read more about [how to package native binaries into a single file](/dotnet/core/deploying/single-file/overview?tabs=cli#native-libraries).
+This error indicates that the native binaries from [Microsoft.Identity.Client.NativeInterop](https://www.nuget.org/packages/Microsoft.Identity.Client.NativeInterop/) were not packaged into the single file bundle. To embed those files for extraction and get one output file, set the property `IncludeNativeLibrariesForSelfExtract` to `true`. [Read more about how to package native binaries into a single file](/dotnet/core/deploying/single-file/overview?tabs=cli#native-libraries).
-### Connection issues
-
-The application user sees an error message similar to "Please check your connection and try again." If this issue occurs regularly, see the [troubleshooting guide for Office](/microsoft-365/troubleshoot/authentication/connection-issue-when-sign-in-office-2016), which also uses the broker.
+### Connection problems
+If the application user regularly sees an error message that's similar to "Please check your connection and try again," see the [troubleshooting guide for Office](/microsoft-365/troubleshoot/authentication/connection-issue-when-sign-in-office-2016). That troubleshooting guide also uses the broker.
## Sample
-[WPF sample that uses WAM](https://github.com/azure-samples/active-directory-dotnet-desktop-msgraph-v2)
+You can find a WPF sample that uses WAM [on GitHub](https://github.com/azure-samples/active-directory-dotnet-desktop-msgraph-v2).
- ## Next steps
Move on to the next article in this scenario,
active-directory Groups Write Back Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-write-back-portal.md
You can use PowerShell to get a list of writeback enabled group using the follow
```powershell-console
Connect-MgGraph -Scopes @('Group.Read.all')
Select-MgProfile -Name beta
-PS D:\> Get-MgGroup -All |Where-Object {$_.AdditionalProperties.writebackConfiguration.isEnabled -Like $true} |Select-Object Displayname,@{N="WriteBackEnabled";E={$_.AdditionalProperties.writebackConfiguration.isEnabled}}
+PS D:\> Get-MgGroup -All |Where-Object {$_.writebackConfiguration.isEnabled -Like $true} |Select-Object Displayname,@{N="WriteBackEnabled";E={$_.writebackConfiguration.isEnabled}}
DisplayName WriteBackEnabled -- -
active-directory Verifiable Credentials Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-faq.md
No, at this point it isn't possible to keep your tenant's DID after you have opt
The tutorials for deploying and running the [samples](verifiable-credentials-configure-issuer.md#prerequisites) describe the use of the `ngrok` tool as an application proxy. This tool is sometimes blocked by IT admins from being used in corporate networks. An alternative is to deploy the sample to [Azure App Service](../../app-service/overview.md) and run it in the cloud. The following links help you deploy the respective sample to Azure App Service. The Free pricing tier will be sufficient for hosting the sample. For each tutorial, you need to start by first creating the Azure App Service instance, then skip creating the app since you already have an app, and then continue the tutorial with deploying it.

-- Dotnet - [Publish to App Service](../../app-service/quickstart-dotnetcore.md?tabs=net60&pivots=development-environment-vs#publish-your-web-app)
+- Dotnet - [Publish to App Service](../../app-service/quickstart-dotnetcore.md?tabs=net60&pivots=development-environment-vs#2-publish-your-web-app)
- Node - [Deploy to App Service](../../app-service/quickstart-nodejs.md?tabs=linux&pivots=development-environment-vscode#deploy-to-azure)
- Java - [Deploy to App Service](../../app-service/quickstart-java.md?tabs=javase&pivots=platform-linux-development-environment-maven#4deploy-the-app). You need to add the maven plugin for Azure App Service to the sample.
-- Python - [Deploy using VSCode](../../app-service/quickstart-python.md?tabs=flask%2Cwindows%2Cazure-cli%2Cvscode-deploy%2Cdeploy-instructions-azportal%2Cterminal-bash%2Cdeploy-instructions-zip-azcli#3deploy-your-application-code-to-azure)
+- Python - [Deploy using Visual Studio Code](../../app-service/quickstart-python.md?tabs=flask%2Cwindows%2Cazure-cli%2Cvscode-deploy%2Cdeploy-instructions-azportal%2Cterminal-bash%2Cdeploy-instructions-zip-azcli#3deploy-your-application-code-to-azure)
Regardless of which language of the sample you're using, it picks up the Azure App Service hostname `https://something.azurewebsites.net` and uses it as the public endpoint. You don't need to configure anything extra to make it work. If you make changes to the code or configuration, you need to redeploy the sample to Azure App Service. Troubleshooting and debugging won't be as easy as running the sample on your local machine, where traces to the console window show you errors, but you can achieve almost the same by using the [Log Stream](../../app-service/troubleshoot-diagnostic-logs.md#stream-logs).
aks Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-netapp-files.md
Title: Provision Azure NetApp Files volumes on Azure Kubernetes Service
description: Learn how to provision Azure NetApp Files volumes on an Azure Kubernetes Service cluster. Previously updated : 04/18/2023 Last updated : 05/07/2023 # Provision Azure NetApp Files volumes on Azure Kubernetes Service
The following command creates an account named *myaccount1* in the *myResourceGr
> [!NOTE] > This subnet must be in the same virtual network as your AKS cluster.
+ > Ensure that the `address-prefixes` value is set correctly and doesn't conflict with existing address ranges.
```azurecli-interactive RESOURCE_GROUP=myResourceGroup
The following command creates an account named *myaccount1* in the *myResourceGr
--vnet-name $VNET_NAME \ --name $SUBNET_NAME \ --delegations "Microsoft.NetApp/volumes" \
- --address-prefixes 10.0.0.0/28
+ --address-prefixes 10.225.0.0/24
```

Volumes can either be provisioned statically or dynamically. Both options are covered further in the next sections.
This section walks you through the installation of Astra Trident using the opera
kubectl create ns trident ```
- The output of the command resembles the following example:
-
- ```output
- namespace/trident created
- ```
2. Run the [kubectl apply][kubectl-apply] command to deploy the Trident operator using the bundle file:

   - For AKS cluster versions earlier than 1.25, run the following command:
This section walks you through the installation of Astra Trident using the opera
   secret/backend-tbc-anf-secret created
   tridentbackendconfig.trident.netapp.io/backend-tbc-anf created
   ```
+
+ 3. To confirm that the backend was set up with the correct credentials and sufficient permissions, run the following [kubectl describe][kubectl-describe] command:
+ ```bash
+ kubectl describe tridentbackendconfig.trident.netapp.io/backend-tbc-anf -n trident
+ ```
### Create a StorageClass
aks Istio Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-upgrade.md
+
+ Title: Upgrade Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+description: Upgrade Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+ Last updated : 05/04/2023+++
+# Upgrade Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+
+This article addresses upgrade experiences for the Istio-based service mesh add-on for Azure Kubernetes Service (preview).
+
+## How Istio components are upgraded
+
+**Minor version:** Currently the Istio add-on only has minor version 1.17 available. Minor version upgrade experiences are planned for when newer versions of Istio (1.18) are introduced.
+
+**Patch version:**
+
+* Istio add-on patch version availability information is published in [AKS weekly release notes][aks-release-notes].
+* Patches are rolled out automatically for istiod and ingress pods as part of these AKS weekly releases.
+* Users need to initiate patches to the Istio proxy in their workloads by restarting the pods for reinjection:
+ * Check the version of the Istio proxy intended for new or restarted pods. This version is the same as the version of the istiod and Istio ingress pods after they were patched:
+
+ ```bash
+ kubectl get cm -n aks-istio-system -o yaml | grep "mcr.microsoft.com\/oss\/istio\/proxyv2"
+ ```
+
+ Example output:
+
+ ```bash
+ "image": "mcr.microsoft.com/oss/istio/proxyv2:1.17.2-distroless",
+ "image": "mcr.microsoft.com/oss/istio/proxyv2:1.17.2-distroless"
+ ```
+
+ * Check the Istio proxy image version for all pods in a namespace:
+
+ ```bash
+ kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' |\
+ sort |\
+ grep "mcr.microsoft.com\/oss\/istio\/proxyv2"
+ ```
+
+ Example output:
+
+ ```bash
+ productpage-v1-979d4d9fc-p4764: docker.io/istio/examples-bookinfo-productpage-v1:1.17.0, mcr.microsoft.com/oss/istio/proxyv2:1.17.1-distroless
+ ```
+
+ * Restart the workloads to trigger reinjection. For example:
+
+ ```bash
+ kubectl rollout restart deployments/productpage-v1 -n default
+ ```
+
+ * To verify that they're now on the newer versions, check the Istio proxy image version again for all pods in the namespace:
+
+ ```bash
+ kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' |\
+ sort |\
+ grep "mcr.microsoft.com\/oss\/istio\/proxyv2"
+ ```
+
+ Example output:
+
+ ```bash
+ productpage-v1-979d4d9fc-p4764: docker.io/istio/examples-bookinfo-productpage-v1:1.17.0, mcr.microsoft.com/oss/istio/proxyv2:1.17.2-distroless
+ ```
+
+[aks-release-notes]: https://github.com/Azure/AKS/releases
aks Monitor Apiserver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-apiserver.md
+
+ Title: Monitor and query Azure Kubernetes Service (AKS) apiserver requests
+description: This article describes how to monitor Azure Kubernetes Service (AKS) kube-audit to query the various types of requests to apiserver.
+ Last updated : 05/08/2023+++
+# Monitor and query Azure Kubernetes Service (AKS) apiserver requests
+
+This article describes how to use [Azure Diagnostics][azure-diagnostics-overview] to enable logging for [kube-audit][kube-audit-overview] events generated by user and application activities by the [kube-apiserver][kube-apiserver-overview] component. Audit events written to the Kubernetes Audit backend are collected and forwarded to your Log Analytics workspace, where you can integrate them into queries, alerts, and visualizations with existing log data.
+
+## Before you begin
+
+* A [Log Analytics workspace][log-analytics-workspace-overview]. If you already have a cluster monitored by [Container insights][container-insights-overview], consider using that one. For more information, see [Designing your Azure Monitor Logs deployment][design-log-analytics-deployment].
+
+## Collect Kubernetes audit logs
+
+Kubernetes audit logging isn't enabled by default on an AKS cluster because Microsoft manages the AKS control plane. You can create diagnostic settings for your cluster resource by using any of the methods described in the [Create diagnostic settings][create-diagnostic settings] article. While configuring diagnostic settings, specify the following:
+
+* **Logs and metrics to route:** For logs, choose the category **Kubernetes Audit** to send to the destination specified later.
+* **Destination details:** Select the checkbox for **Log Analytics**.
+
+> [!NOTE]
+> Enabling kube-audit logs can involve substantial cost. Consider disabling kube-audit logging when it isn't required.
+> For strategies to reduce your Azure Monitor costs, see [Cost optimization and Azure Monitor][cost-optimization-azure-monitor].
+
+After a few moments, the new setting appears in your list of settings for this resource. Logs are streamed to the specified destinations as new event data is generated. It might take up to 15 minutes between when an event is emitted and when it appears in a [Log Analytics workspace][log-analytics-workspace-overview].
+
+After creating the diagnostic setting to collect kube-audit events, the data can be queried from the [AzureDiagnostics][azure-diagnostics-table] table.
+
+## Query the apiserver requests
+
+It's often useful to build queries that start with an example or two and then modify them to fit your requirements. To help build more advanced queries, you can experiment with the following sample query.
+
+```kusto
+let starttime = datetime("2023-02-23");
+let endtime = datetime("2023-02-24");
+AzureDiagnostics
+| where TimeGenerated between(starttime..endtime)
+| where Category == "kube-audit"
+| extend event = parse_json(log_s)
+| extend HttpMethod = tostring(event.verb)
+| extend User = tostring(event.user.username)
+| extend Apiserver = pod_s
+| extend SourceIP = tostring(event.sourceIPs[0])
+| project TimeGenerated, Category, HttpMethod, User, Apiserver, SourceIP, OperationName, event
+```
+
+## Next steps
+
+For more information about AKS metrics, logs, and other important values, see [Monitoring AKS data reference][monitoring-aks-data-reference].
+
+<!-- LINKS - external -->
+[kube-audit-overview]: https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/
+[kube-apiserver-overview]: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/
+
+<!-- LINKS - internal -->
+[azure-diagnostics-overview]: ../azure-monitor/essentials/diagnostic-settings.md
+[log-analytics-workspace-overview]: ../azure-monitor/logs/log-analytics-workspace-overview.md
+[design-log-analytics-deployment]: ../azure-monitor/logs/design-logs-deployment.md
+[create-diagnostic settings]: ../azure-monitor/essentials/diagnostic-settings.md#create-diagnostic-settings
+[cost-optimization-azure-monitor]: ../azure-monitor/best-practices-cost.md
+[azure-diagnostics-table]: /azure/azure-monitor/reference/tables/azurediagnostics
+[container-insights-overview]: ../azure-monitor/containers/container-insights-overview.md
+[monitoring-aks-data-reference]: monitor-aks-reference.md
aks Open Service Mesh About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-about.md
OSM can be added to your Azure Kubernetes Service (AKS) cluster by enabling the
> > |Kubernetes version | OSM version installed | > ||--|
-> | 1.24.0 or greater | 1.2.3 |
+> | 1.24.0 or greater | 1.2.4 |
> | Between 1.23.5 and 1.24.0 | 1.1.3 | > | Below 1.23.5 | 1.0.0 |
aks Open Service Mesh Binary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-binary.md
This article will discuss how to download the OSM client library to be used to o
> [!IMPORTANT] > Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM:
-> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.3* of OSM.
+> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.4* of OSM.
> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.3* of OSM. > - If your cluster is running a version of Kubernetes below 1.23.5, the OSM add-on installs version *1.0.0* of OSM.
aks Open Service Mesh Deploy Addon Az Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-az-cli.md
This article shows you how to install the Open Service Mesh (OSM) add-on on an A
> [!IMPORTANT] > Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM:
-> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.3* of OSM.
+> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.4* of OSM.
> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.3* of OSM. > - If your cluster is running a version of Kubernetes below 1.23.5, the OSM add-on installs version *1.0.0* of OSM.
aks Open Service Mesh Deploy Addon Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-bicep.md
This article shows you how to deploy the Open Service Mesh (OSM) add-on to Azure
> [!IMPORTANT] > Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM:
-> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.3* of OSM.
+> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.4* of OSM.
> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.3* of OSM. > - If your cluster is running a version of Kubernetes below 1.23.5, the OSM add-on installs version *1.0.0* of OSM.
aks Operator Best Practices Cluster Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-cluster-security.md
spec:
- 169.254.169.254/32 ```
-> [!NOTE]
-> Alternatively you can use [Pod Identity](./use-azure-ad-pod-identity.md) though this is in Public Preview. It has a pod (NMI) that runs as a DaemonSet on each node in the AKS cluster. NMI intercepts security token requests to the Azure Instance Metadata Service on each node, redirect them to itself and validates if the pod has access to the identity it's requesting a token for and fetch the token from the Azure AD tenant on behalf of the application.
->
- ## Secure container access to resources > **Best practice guidance**
api-management Api Management In Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-in-workspace.md
This article is an introduction to managing APIs, products, subscriptions, and o
> [!NOTE] > * Workspaces are a preview feature of API Management and subject to certain [limitations](workspaces-overview.md#preview-limitations).
-> * This feature is being released during March and April 2023.
> * Workspaces are supported in API Management REST API version 2022-09-01-preview or later. > * For pricing considerations, see [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/).
api-management How To Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-create-workspace.md
Set up a [workspace](workspaces-overview.md) (preview) to enable a decentralized
> [!NOTE] > * Workspaces are a preview feature of API Management and subject to certain [limitations](workspaces-overview.md#preview-limitations).
-> * This feature is being released during March and April 2023.
> * Workspaces are supported in API Management REST API version 2022-09-01-preview or later. > * For pricing considerations, see [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/).
api-management Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-concepts.md
The following are virtual network resource requirements for API Management. Some
* An Azure Resource Manager virtual network is required. * You must provide a Standard SKU [public IPv4 address](../virtual-network/ip-services/public-ip-addresses.md#sku) in addition to specifying a virtual network and subnet. * The subnet used to connect to the API Management instance may contain other Azure resource types.
+* The subnet used to connect to the API Management instance should not have any delegations enabled. The **Delegate subnet to a service** setting for the subnet should be set to **None**.
* A [network security group](../virtual-network/network-security-groups-overview.md) attached to the subnet above. A network security group (NSG) is required to explicitly allow inbound connectivity, because the load balancer used internally by API Management is secure by default and rejects all inbound traffic. * The API Management service, virtual network and subnet, and public IP address resource must be in the same region and subscription. * For multi-region API Management deployments, configure virtual network resources separately for each location.
The following are virtual network resource requirements for API Management. Some
* An Azure Resource Manager virtual network is required. * The subnet used to connect to the API Management instance must be dedicated to API Management. It can't contain other Azure resource types.
+* The subnet used to connect to the API Management instance should not have any delegations enabled. The **Delegate subnet to a service** setting for the subnet should be set to **None**.
* The API Management service, virtual network, and subnet resources must be in the same region and subscription. * For multi-region API Management deployments, configure virtual network resources separately for each location.
api-management Workspaces Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/workspaces-overview.md
In API Management, *workspaces* allow decentralized API development teams to man
> [!NOTE] > * Workspaces are a preview feature of API Management and subject to certain [limitations](#preview-limitations).
-> * This feature is being released during March and April 2023.
> * Workspaces are supported in API Management REST API version 2022-09-01-preview or later. > * For pricing considerations, see [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/).
app-service Deploy Complex Application Predictably https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-complex-application-predictably.md
- Title: Deploy apps predictably with ARM
-description: Learn how to deploy multiple Azure App Service apps as a single unit and in a predictable manner using Azure Resource Manager templates and PowerShell scripting.
-- Previously updated : 01/06/2016--
-# Provision and deploy microservices predictably in Azure
-This tutorial shows how to provision and deploy an application composed of [microservices](https://en.wikipedia.org/wiki/Microservices) in [Azure App Service](https://azure.microsoft.com/services/app-service/) as a single unit and in a predictable manner using JSON resource group templates and PowerShell scripting.
-
-When provisioning and deploying high-scale applications that are composed of highly decoupled microservices, repeatability and predictability are crucial to success. [Azure App Service](https://azure.microsoft.com/services/app-service/) enables you to create microservices that include web apps, mobile back ends, and API apps. [Azure Resource Manager](../azure-resource-manager/management/overview.md) enables you to manage all the microservices as a unit, together with resource dependencies such as database and source control settings. Now, you can also deploy such an application using JSON templates and simple PowerShell scripting.
-
-## What you will do
-In the tutorial, you will deploy an application that includes:
-
-* Two App Service apps (i.e. two microservices)
-* A backend SQL Database
-* App settings, connection strings, and source control
-* Application insights, alerts, autoscaling settings
-
-## Tools you will use
-In this tutorial, you will use the following tools. Since it's not a comprehensive discussion on tools, I'm going to stick to the end-to-end scenario and just give you a brief intro to each, and where you can find more information on it.
-
-### Azure Resource Manager templates (JSON)
-Every time you create an app in Azure App Service, for example, Azure Resource Manager uses a JSON template to create the entire resource group with the component resources. A complex template from the [Azure Marketplace](../marketplace/index.yml) can include the database, storage accounts, the App Service plan, the app itself, alert rules, app settings, autoscale settings, and more, and all these templates are available to you through PowerShell. For more information on the Azure Resource Manager templates, see [Authoring Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md)
-
-### Azure SDK 2.6 for Visual Studio
-The newest SDK contains improvements to the Resource Manager template support in the JSON editor. You can use this to quickly create a resource group template from scratch or open an existing JSON template (such as a downloaded gallery template) for modification, populate the parameters file, and even deploy the resource group directly from an Azure Resource Group solution.
-
-For more information, see [Azure SDK 2.6 for Visual Studio](https://azure.microsoft.com/blog/2015/04/29/announcing-the-azure-sdk-2-6-for-net/).
-
-### Azure PowerShell 0.8.0 or later
-Beginning in version 0.8.0, the Azure PowerShell installation includes the Azure Resource Manager module in addition to the Azure module. This new module enables you to script the deployment of resource groups.
-
-For more information, see [Using Azure PowerShell with Azure Resource Manager](../azure-resource-manager/management/manage-resources-powershell.md)
-
-### Azure Resource Explorer
-This [preview tool](https://resources.azure.com) enables you to explore the JSON definitions of all the resource groups in your subscription and the individual resources. In the tool, you can edit the JSON definitions of a resource, delete an entire hierarchy of resources, and create new resources. The information readily available in this tool is very helpful for template authoring because it shows you what properties you need to set for a particular type of resource, the correct values, etc. You can even create your resource group in the [Azure Portal](https://portal.azure.com/), then inspect its JSON definitions in the explorer tool to help you templatize the resource group.
-
-### Deploy to Azure button
-If you use GitHub for source control, you can put a [Deploy to Azure button](../azure-resource-manager/templates/deploy-to-azure-button.md) into your README.MD, which enables a turn-key deployment UI to Azure. While you can do this for any simple app, you can extend this to enable deploying an entire resource group by putting an azuredeploy.json file in the repository root. This JSON file, which contains the resource group template, will be used by the Deploy to Azure button to create the resource group. For an example, see the [ToDoApp](https://github.com/azure-appservice-samples/ToDoApp) sample, which you will use in this tutorial.
-
-## Get the sample resource group template
-So now let's get right to it.
-
-1. Navigate to the [ToDoApp](https://github.com/azure-appservice-samples/ToDoApp) App Service sample.
-2. In readme.md, click **Deploy to Azure**.
-3. You're taken to the [deploy-to-azure](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure-appservice-samples%2FToDoApp%2Fmaster%2Fazuredeploy.json) site and asked to input deployment parameters. Notice that most of the fields are populated with the repository name and some random strings for you. You can change all the fields if you want, but the only things you have to enter are the SQL Server administrative login and the password, then click **Next**.
-
- ![Shows the input deployment parameters on the deploy-to-azure site.](./media/app-service-deploy-complex-application-predictably/gettemplate-1-deploybuttonui.png)
-4. Next, click **Deploy** to start the deployment process. Once the process runs to completion, click the http://todoapp*XXXX*.azurewebsites.net link to browse the deployed application.
-
- ![Shows your application's deployment process.](./media/app-service-deploy-complex-application-predictably/gettemplate-2-deployprogress.png)
-
- The UI would be a little slow when you first browse to it because the apps are just starting up, but convince yourself that it's a fully-functional application.
-5. Back in the Deploy page, click the **Manage** link to see the new application in the Azure Portal.
-6. In the **Essentials** dropdown, click the resource group link. Note also that the app is already connected to the GitHub repository under **External Project**.
-
- ![Shows the Resource group link in the Essentials dropdown section.](./media/app-service-deploy-complex-application-predictably/gettemplate-3-portalresourcegroup.png)
-7. In the resource group blade, note that there are already two apps and one SQL Database in the resource group.
-
- ![Shows the resources available in your resource group.](./media/app-service-deploy-complex-application-predictably/gettemplate-4-portalresourcegroupclicked.png)
-
-Everything that you just saw in a few short minutes is a fully deployed two-microservice application, with all the components, dependencies, settings, database, and continuous publishing, set up by an automated orchestration in Azure Resource Manager. All this was done by two things:
-
-* The Deploy to Azure button
-* azuredeploy.json in the repo root
-
-You can deploy this same application tens, hundreds, or thousands of times and have the exact same configuration every time. The repeatability and the predictability of this approach enables you to deploy high-scale applications with ease and confidence.
-
-## Examine (or edit) AZUREDEPLOY.JSON
-Now let's look at how the GitHub repository was set up. You will be using the JSON editor in the Azure .NET SDK, so if you haven't already installed [Azure .NET SDK 2.6](https://azure.microsoft.com/downloads/), do it now.
-
-1. Clone the [ToDoApp](https://github.com/azure-appservice-samples/ToDoApp) repository using your favorite git tool. In the screenshot below, I'm doing this in the Team Explorer in Visual Studio 2013.
-
- ![Shows how to use a git tool to clone the ToDoApp repository.](./media/app-service-deploy-complex-application-predictably/examinejson-1-vsclone.png)
-2. From the repository root, open azuredeploy.json in Visual Studio. If you don't see the JSON Outline pane, you need to install the Azure .NET SDK.
-
- ![Shows the JSON Outline pane in Visual Studio.](./media/app-service-deploy-complex-application-predictably/examinejson-2-vsjsoneditor.png)
-
-I'm not going to describe every detail of the JSON format, but the [More Resources](#resources) section has links for learning the resource group template language. Here, I'm just going to show you the interesting features that can help you get started in making your own custom template for app deployment.
-
-### Parameters
-Take a look at the parameters section: most of these parameters are what the **Deploy to Azure** button prompts you to input. The site behind the **Deploy to Azure** button populates its input UI using the parameters defined in azuredeploy.json, and those parameters are then used throughout the resource definitions for values such as resource names and properties.
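-
-As a rough sketch, a parameters section looks like the following. The parameter names shown here (`hostingPlanName`, `sqlAdminLogin`, `sqlAdminPassword`, `sku`) are illustrative assumptions; check azuredeploy.json in the repository for the exact names, defaults, and allowed values it defines.
-
-```json
-{
-  "parameters": {
-    "hostingPlanName": { "type": "string" },
-    "sqlAdminLogin": { "type": "string" },
-    "sqlAdminPassword": { "type": "securestring" },
-    "sku": {
-      "type": "string",
-      "defaultValue": "Free",
-      "allowedValues": [ "Free", "Shared", "Basic", "Standard", "Premium" ]
-    }
-  }
-}
-```
-
-A parameter with a `defaultValue` is pre-populated in the input UI, and a parameter with `allowedValues` is rendered as a dropdown.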
-
-### Resources
-In the resources node, you can see that four top-level resources are defined: a SQL Server instance, an App Service plan, and two apps.
-
-#### App Service plan
-Let's start with a simple root-level resource in the JSON. In the JSON Outline, click the App Service plan named **[hostingPlanName]** to highlight the corresponding JSON code.
-
-![Shows the [hostingPlanName] section of the JSON code.](./media/app-service-deploy-complex-application-predictably/examinejson-3-appserviceplan.png)
-
-The `type` element specifies the resource type string for an App Service plan (it was called a server farm a long, long time ago). Other elements and properties are filled in using the parameters defined in the JSON file, and this resource doesn't have any nested resources.
-
-> [!NOTE]
-> Note also that the value of `apiVersion` tells Azure which version of the REST API to use with the JSON resource definition, and it can affect how the resource must be formatted inside the `{}`.
->
->
-
-#### SQL Server
-Next, click on the SQL Server resource named **SQLServer** in the JSON Outline.
-
-![Shows the SQL Server resource named SQLServer in the JSON Outline.](./media/app-service-deploy-complex-application-predictably/examinejson-4-sqlserver.png)
-
-Note the following about the highlighted JSON code (a trimmed-down sketch follows this list):
-
-* The use of parameters ensures that the created resources are named and configured in a way that makes them consistent with one another.
-* The SQLServer resource has two nested resources, each with a different value for `type`.
-* The nested resources inside `"resources": [...]`, where the database and the firewall rules are defined, have a `dependsOn` element that specifies the resource ID of the root-level SQLServer resource. This tells Azure Resource Manager: "before you create this resource, that other resource must already exist; and if that other resource is defined in the template, create it first."
-
- > [!NOTE]
- > For detailed information on how to use the `resourceId()` function, see [Azure Resource Manager Template Functions](../azure-resource-manager/templates/template-functions-resource.md#resourceid).
- >
- >
-* The effect of the `dependsOn` element is that Azure Resource Manager knows which resources can be created in parallel and which must be created sequentially.
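-
-Putting those observations together, the shape of the SQLServer resource is roughly the following sketch. The variable, parameter, and child-resource names are illustrative rather than the sample's exact ones, and most properties are elided with `...`.
-
-```json
-{
-  "name": "[variables('sqlServerName')]",
-  "type": "Microsoft.Sql/servers",
-  "location": "[resourceGroup().location]",
-  "properties": {
-    "administratorLogin": "[parameters('sqlAdminLogin')]",
-    "administratorLoginPassword": "[parameters('sqlAdminPassword')]"
-  },
-  "resources": [
-    {
-      "name": "[parameters('databaseName')]",
-      "type": "databases",
-      "dependsOn": [
-        "[resourceId('Microsoft.Sql/servers', variables('sqlServerName'))]"
-      ],
-      ...
-    },
-    {
-      "name": "AllowAllAzureIps",
-      "type": "firewallrules",
-      "dependsOn": [
-        "[resourceId('Microsoft.Sql/servers', variables('sqlServerName'))]"
-      ],
-      ...
-    }
-  ]
-}
-```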
-
-#### App Service app
-Now, let's move on to the actual apps themselves, which are more complicated. Click the [variables('apiSiteName')] app in the JSON Outline to highlight its JSON code. You'll notice that things are getting much more interesting, so let's walk through the features one by one:
-
-##### Root resource
-The app depends on two different resources. This means that Azure Resource Manager will create the app only after both the App Service plan and the SQL Server instance are created.
-
-![Shows the app dependencies on the App Service plan and the SQL Server instance.](./media/app-service-deploy-complex-application-predictably/examinejson-5-webapproot.png)
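-
-In outline, the root of the app resource looks something like the sketch below (heavily abbreviated, with illustrative parameter and variable names); the two `dependsOn` entries are the point of interest here.
-
-```json
-{
-  "name": "[variables('apiSiteName')]",
-  "type": "Microsoft.Web/sites",
-  "location": "[resourceGroup().location]",
-  "dependsOn": [
-    "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
-    "[resourceId('Microsoft.Sql/servers', variables('sqlServerName'))]"
-  ],
-  "properties": {
-    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]"
-  },
-  "resources": [ ... ]
-}
-```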
-
-##### App settings
-The app settings are also defined as a nested resource.
-
-![Shows the app settings defined as a nested resource in the JSON code.](./media/app-service-deploy-complex-application-predictably/examinejson-6-webappsettings.png)
-
-In the `properties` element for `config/appsettings`, you have two app settings in the format `"<name>" : "<value>"` (a sketch of the whole nested resource follows this list).
-
-* `PROJECT` is a [Kudu setting](https://github.com/projectkudu/kudu/wiki/Customizing-deployments) that tells the Azure deployment process which project to use in a multi-project Visual Studio solution. I'll show you later how source control is configured, but because the ToDoApp code is in a multi-project Visual Studio solution, we need this setting.
-* `clientUrl` is simply an app setting that the application code uses.
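-
-As a sketch, the whole nested resource looks like this; the `PROJECT` path and the `clientUrl` value are placeholders, not the sample's actual values.
-
-```json
-{
-  "name": "appsettings",
-  "type": "config",
-  "dependsOn": [
-    "[resourceId('Microsoft.Web/sites', variables('apiSiteName'))]"
-  ],
-  "properties": {
-    "PROJECT": "<path-to-web-project>.csproj",
-    "clientUrl": "<client-site-url>"
-  }
-}
-```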
-
-##### Connection strings
-The connection strings are also defined as a nested resource.
-
-![Shows how the connection strings are defined as a nested resource in the JSON code.](./media/app-service-deploy-complex-application-predictably/examinejson-7-webappconnstr.png)
-
-In the `properties` element for `config/connectionstrings`, each connection string is also defined as a name:value pair, with the specific format of `"<name>" : {"value": "…", "type": "…"}`. For the `type` element, possible values are `MySql`, `SQLServer`, `SQLAzure`, and `Custom`.
-
-> [!TIP]
-> For a definitive list of the connection string types, run the following command in Azure PowerShell:
-> `[Enum]::GetNames("Microsoft.WindowsAzure.Commands.Utilities.Websites.Services.WebEntities.DatabaseType")`
->
->
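-
-A sketch of the whole nested resource, using the format described above, looks like the following; the connection string name `defaultConnection` and the truncated `concat()` value are illustrative.
-
-```json
-{
-  "name": "connectionstrings",
-  "type": "config",
-  "dependsOn": [
-    "[resourceId('Microsoft.Web/sites', variables('apiSiteName'))]"
-  ],
-  "properties": {
-    "defaultConnection": {
-      "value": "[concat('Data Source=tcp:', ...)]",
-      "type": "SQLAzure"
-    }
-  }
-}
-```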
-
-##### Source control
-The source control settings are also defined as a nested resource. Azure Resource Manager uses this resource to configure continuous publishing (see caveat on `IsManualIntegration` later) and also to kick off the deployment of application code automatically during the processing of the JSON file.
-
-![Shows how the source control settings are defined as a nested resource in the JSON code.](./media/app-service-deploy-complex-application-predictably/examinejson-8-webappsourcecontrol.png)
-
-`RepoUrl` and `branch` are intuitive: they point to the Git repository and the name of the branch to publish from. Again, these are defined by input parameters.
-
-Note in the `dependsOn` element that, in addition to the app resource itself, `sourcecontrols/web` also depends on `config/appsettings` and `config/connectionstrings`. This is because once `sourcecontrols/web` is configured, the Azure deployment process will automatically attempt to deploy, build, and start the application code. Therefore, inserting this dependency helps you make sure that the application has access to the required app settings and connection strings before the application code is run.
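-
-A sketch of the whole nested resource, then, looks like the following (abbreviated; the `resourceId()` expressions assume the app and config resource names used earlier in this article):
-
-```json
-{
-  "name": "web",
-  "type": "sourcecontrols",
-  "dependsOn": [
-    "[resourceId('Microsoft.Web/sites', variables('apiSiteName'))]",
-    "[resourceId('Microsoft.Web/sites/config', variables('apiSiteName'), 'appsettings')]",
-    "[resourceId('Microsoft.Web/sites/config', variables('apiSiteName'), 'connectionstrings')]"
-  ],
-  "properties": {
-    "RepoUrl": "[parameters('repoUrl')]",
-    "branch": "[parameters('branch')]",
-    "IsManualIntegration": true
-  }
-}
-```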
-
-> [!NOTE]
-> Note also that `IsManualIntegration` is set to `true`. This property is necessary in this tutorial because you don't actually own the GitHub repository, and thus can't grant Azure permission to configure continuous publishing from [ToDoApp](https://github.com/azure-appservice-samples/ToDoApp) (that is, to push automatic repository updates to Azure). You can use the default value `false` for the specified repository only if you have previously configured the owner's GitHub credentials in the [Azure portal](https://portal.azure.com/). In other words, if you have previously set up source control to GitHub or Bitbucket for any app in the [Azure portal](https://portal.azure.com/) using your user credentials, Azure remembers the credentials and uses them whenever you deploy any app from GitHub or Bitbucket in the future. However, if you haven't done this already, deployment of the JSON template fails when Azure Resource Manager tries to configure the app's source control settings, because it can't sign in to GitHub or Bitbucket with the repository owner's credentials.
->
->
-
-## Compare the JSON template with deployed resource group
-Here, you can go through all the app's blades in the [Azure portal](https://portal.azure.com/), but there's another tool that's just as useful, if not more so. Go to the [Azure Resource Explorer](https://resources.azure.com) preview tool, which gives you a JSON representation of all the resource groups in your subscriptions as they actually exist in the Azure backend. You can also see how the resource group's JSON hierarchy in Azure corresponds to the hierarchy in the template file that's used to create it.
-
-For example, when I go to the [Azure Resource Explorer](https://resources.azure.com) tool and expand the nodes in the explorer, I can see the resource group and the root-level resources that are collected under their respective resource types.
-
-![View the resource group and root-level resources in the expanded Azure Resources Explorer tool.](./media/app-service-deploy-complex-application-predictably/ARM-1-treeview.png)
-
-If you drill down to an app, you should be able to see app configuration details similar to the following screenshot:
-
-![Drill down to view the configuration details in the app.](./media/app-service-deploy-complex-application-predictably/ARM-2-jsonview.png)
-
-Again, the nested resources should have a hierarchy very similar to the one in your JSON template file, and you should see the app settings, connection strings, and so on, properly reflected in the JSON pane. If a setting is missing here, that may indicate an issue with your JSON template file and can help you troubleshoot it.
-
-## Deploy the resource group template yourself
-The **Deploy to Azure** button is great, but it allows you to deploy the resource group template in azuredeploy.json only if you have already pushed azuredeploy.json to GitHub. The Azure .NET SDK also provides the tools for you to deploy any JSON template file directly from your local machine. To do this, follow the steps below:
-
-1. In Visual Studio, click **File** > **New** > **Project**.
-2. Click **Visual C#** > **Cloud** > **Azure Resource Group**, then click **OK**.
-
- ![Create a new project as an Azure Resource Group in the Azure .NET SDK.](./media/app-service-deploy-complex-application-predictably/deploy-1-vsproject.png)
-3. In **Select Azure Template**, select **Blank Template** and click **OK**.
-4. Drag azuredeploy.json into the **Template** folder of your new project.
-
- ![Shows the result of dragging the azuredeploy.json file into the Template folder of your project.](./media/app-service-deploy-complex-application-predictably/deploy-2-copyjson.png)
-5. From Solution Explorer, open the copied azuredeploy.json.
-6. Just for the sake of the demonstration, let's add some standard Application Insights resources to our JSON file by clicking **Add Resource**. If you're just interested in deploying the JSON file, skip to the deploy steps.
-
- ![Shows the Add Resource button you can use to add standard Application Insight resources to your JSON file.](./media/app-service-deploy-complex-application-predictably/deploy-3-newresource.png)
-7. Select **Application Insights for Web Apps**, make sure an existing App Service plan and app are selected, and then click **Add**.
-
- ![Shows the selection of Application Insights for Web Apps, Name, App Service Plan, and Web App.](./media/app-service-deploy-complex-application-predictably/deploy-4-newappinsight.png)
-
-   You'll now be able to see several new resources that, depending on the resource and what it does, have dependencies on either the App Service plan or the app. These resources aren't enabled by their existing definitions, and you're going to change that.
-
- ![View the new resources that have dependencies on the App Service plan or app.](./media/app-service-deploy-complex-application-predictably/deploy-5-appinsightresources.png)
-8. In the JSON Outline, click **appInsights AutoScale** to highlight its JSON code. This is the scaling setting for your App Service plan.
-9. In the highlighted JSON code, locate the `location` and `enabled` properties and set them as shown below.
-
- ![Shows the location and enabled properties in the appInsights AutoScale JSON code and the values you should set them to.](./media/app-service-deploy-complex-application-predictably/deploy-6-autoscalesettings.png)
-10. In the JSON Outline, click **CPUHigh appInsights** to highlight its JSON code. This is an alert.
-11. Locate the `location` and `isEnabled` properties and set them as shown below. Do the same for the other three alerts (purple bulbs).
-
- ![Shows the location and isEnabled properties in the CPUHigh appInsights JSON code and the values you should set them to.](./media/app-service-deploy-complex-application-predictably/deploy-7-alerts.png)
-12. You're now ready to deploy. Right-click the project and select **Deploy** > **New Deployment**.
-
- ![Shows how to deploy your new project.](./media/app-service-deploy-complex-application-predictably/deploy-8-newdeployment.png)
-13. Log in to your Azure account if you haven't already done so.
-14. Select an existing resource group in your subscription or create a new one, select **azuredeploy.json**, and then click **Edit Parameters**.
-
- ![Shows how to edit the parameters in the azuredeploy.json file.](./media/app-service-deploy-complex-application-predictably/deploy-9-deployconfig.png)
-
-   You'll now be able to edit all the parameters defined in the template file in a convenient table. Parameters that define defaults already have their default values filled in, and parameters that define a list of allowed values are shown as dropdowns.
-
- ![Shows parameters that define a list of allowed values as dropdown lists.](./media/app-service-deploy-complex-application-predictably/deploy-10-parametereditor.png)
-15. Fill in all the empty parameters, and use the [GitHub repo address for ToDoApp](https://github.com/azure-appservice-samples/ToDoApp.git) in **repoUrl**. Then, click **Save**.
-
- ![Shows the newly filled parameters for the azuredeploy.json file.](./media/app-service-deploy-complex-application-predictably/deploy-11-parametereditorfilled.png)
-
- > [!NOTE]
- > Autoscaling is a feature offered in the **Standard** tier or higher, and plan-level alerts are offered in the **Basic** tier or higher, so you need to set the **sku** parameter to **Standard** or **Premium** to see all your new Application Insights resources light up.
- >
- >
-16. Click **Deploy**. If you selected **Save passwords**, the password is saved in the parameter file **in plain text**. Otherwise, you're asked to enter the database password during the deployment process.
-
-That's it! Now you just need to go to the [Azure portal](https://portal.azure.com/) and the [Azure Resource Explorer](https://resources.azure.com) tool to see the new alerts and autoscale settings added to your template-deployed application.
-
-Your steps in this section mainly accomplished the following:
-
-1. Prepared the template file
-2. Created a parameter file to go with the template file
-3. Deployed the template file with the parameter file
-
-The last step can easily be done with a PowerShell cmdlet. To see what Visual Studio did when it deployed your application, open Scripts\Deploy-AzureResourceGroup.ps1. There's a lot of code there, but I'm just going to highlight the pertinent code you need to deploy the template file with the parameter file.
-
-![Shows the pertinent code in the script that you need use to deploy the template file with the parameter file.](./media/app-service-deploy-complex-application-predictably/deploy-12-powershellsnippet.png)
-
-The last cmdlet, `New-AzureResourceGroup`, is the one that actually performs the action. All of this should demonstrate that, with the help of tooling, it's relatively straightforward to deploy your cloud application predictably. Every time you run the cmdlet on the same template with the same parameter file, you get the same result.
-
-## Summary
-In DevOps, repeatability and predictability are key to any successful deployment of a high-scale application composed of microservices. In this tutorial, you deployed a two-microservice application to Azure as a single resource group by using an Azure Resource Manager template. Hopefully, it has given you the knowledge you need to start converting your own application in Azure into a template so that you can provision and deploy it predictably.
-
-<a name="resources"></a>
-
-## More resources
-* [Azure Resource Manager Template Language](../azure-resource-manager/templates/syntax.md)
-* [Authoring Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md)
-* [Azure Resource Manager Template Functions](../azure-resource-manager/templates/template-functions.md)
-* [Deploy an application with Azure Resource Manager template](../azure-resource-manager/templates/deploy-powershell.md)
-* [Using Azure PowerShell with Azure Resource Manager](../azure-resource-manager/management/manage-resources-powershell.md)
-* [Troubleshooting Resource Group Deployments in Azure](../azure-resource-manager/templates/common-deployment-errors.md)
-
-## Next steps
-
-To learn about the JSON syntax and properties for resource types deployed in this article, see:
-
-* [Microsoft.Sql/servers](/azure/templates/microsoft.sql/servers)
-* [Microsoft.Sql/servers/databases](/azure/templates/microsoft.sql/servers/databases)
-* [Microsoft.Sql/servers/firewallRules](/azure/templates/microsoft.sql/servers/firewallrules)
-* [Microsoft.Web/serverfarms](/azure/templates/microsoft.web/serverfarms)
-* [Microsoft.Web/sites](/azure/templates/microsoft.web/sites)
-* [Microsoft.Web/sites/slots](/azure/templates/microsoft.web/sites/slots)
-* [Microsoft.Insights/autoscalesettings](/azure/templates/microsoft.insights/autoscalesettings)
app-service Deploy Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-resource-manager-template.md
- Title: Deploy apps with templates
-description: Find guidance on creating Azure Resource Manager templates to provision and deploy App Service apps.
--- Previously updated : 01/03/2019---
-# Guidance on deploying web apps by using Azure Resource Manager templates
-
-This article provides recommendations for creating Azure Resource Manager templates to deploy Azure App Service solutions. These recommendations can help you avoid common problems.
-
-## Define dependencies
-
-Defining dependencies for web apps requires an understanding of how the resources within a web app interact. If you specify dependencies in an incorrect order, you might cause deployment errors or create a race condition that stalls the deployment.
-
-> [!WARNING]
-> If you include an MSDeploy site extension in your template, you must set any configuration resources as dependent on the MSDeploy resource. Configuration changes cause the site to restart asynchronously. By making the configuration resources dependent on MSDeploy, you ensure that MSDeploy finishes before the site restarts. Without these dependencies, the site might restart during the deployment process of MSDeploy. For an example template, see [WordPress Template with Web Deploy Dependency](https://github.com/davidebbo/AzureWebsitesSamples/blob/master/ARMTemplates/WordpressTemplateWebDeployDependency.json).
-
-The following image shows the dependency order for various App Service resources:
-
-![Web app dependencies](media/web-sites-rm-template-guidance/web-dependencies.png)
-
-You deploy resources in the following order:
-
-**Tier 1**
-* App Service plan.
-* Any other related resources, like databases or storage accounts.
-
-**Tier 2**
-* Web app--depends on the App Service plan.
-* Azure Application Insights instance that targets the server farm--depends on the App Service plan.
-
-**Tier 3**
-* Source control--depends on the web app.
-* MSDeploy site extension--depends on the web app.
-* Azure Application Insights instance that targets the web app--depends on the web app.
-
-**Tier 4**
-* App Service certificate--depends on source control or MSDeploy if either is present. Otherwise, it depends on the web app.
-* Configuration settings (connection strings, web.config values, app settings)--depends on source control or MSDeploy if either is present. Otherwise, it depends on the web app.
-
-**Tier 5**
-* Host name bindings--depends on the certificate if present. Otherwise, it depends on a higher-level resource.
-* Site extensions--depends on configuration settings if present. Otherwise, it depends on a higher-level resource.
-
-Typically, your solution includes only some of these resources and tiers. For missing tiers, map lower resources to the next-higher tier.
-
-The following example shows part of a template. The value of the connection string configuration depends on the MSDeploy extension. The MSDeploy extension depends on the web app and database.
-
-```json
-{
- "name": "[parameters('appName')]",
- "type": "Microsoft.Web/Sites",
- ...
- "resources": [
- {
- "name": "MSDeploy",
- "type": "Extensions",
- "dependsOn": [
- "[concat('Microsoft.Web/Sites/', parameters('appName'))]",
-        "[concat('Microsoft.Sql/servers/', parameters('dbServerName'), '/databases/', parameters('dbName'))]"
- ],
- ...
- },
- {
- "name": "connectionstrings",
- "type": "config",
- "dependsOn": [
- "[concat('Microsoft.Web/Sites/', parameters('appName'), '/Extensions/MSDeploy')]"
- ],
- ...
- }
- ]
-}
-```
-
-For a ready-to-run sample that uses the code above, see [Template: Build a simple Umbraco Web App](https://github.com/Azure/azure-quickstart-templates/tree/master/application-workloads/umbraco/umbraco-webapp-simple).
-
-## Find information about MSDeploy errors
-
-If your Resource Manager template uses MSDeploy, the deployment error messages can be difficult to understand. To get more information after a failed deployment, try the following steps:
-
-1. Go to the site's [Kudu console](https://github.com/projectkudu/kudu/wiki/Kudu-console).
-2. Browse to the folder at D:\home\LogFiles\SiteExtensions\MSDeploy.
-3. Look for the appManagerStatus.xml and appManagerLog.xml files. The first file logs the status. The second file logs information about the error. If the error isn't clear to you, you can include it when you're asking for help on the [forum](/answers/topics/azure-webapps.html).
-
-## Choose a unique web app name
-
-The name for your web app must be globally unique. You can use a naming convention that's likely to be unique, or you can use the [uniqueString function](../azure-resource-manager/templates/template-functions-string.md#uniquestring) to assist with generating a unique name.
-
-```json
-{
- "apiVersion": "2016-08-01",
- "name": "[concat(parameters('siteNamePrefix'), uniqueString(resourceGroup().id))]",
- "type": "Microsoft.Web/sites",
- ...
-}
-```
-
-## Deploy web app certificate from Key Vault
--
-If your template includes a [Microsoft.Web/certificates](/azure/templates/microsoft.web/certificates) resource for TLS/SSL binding, and the certificate is stored in a Key Vault, you must make sure the App Service identity can access the certificate.
-
-In global Azure, the App Service service principal has the ID of **abfa0a7c-a6b6-4736-8310-5855508787cd**. To grant access to Key Vault for the App Service service principal, use:
-
-```azurepowershell-interactive
-Set-AzKeyVaultAccessPolicy `
- -VaultName KEY_VAULT_NAME `
- -ServicePrincipalName abfa0a7c-a6b6-4736-8310-5855508787cd `
- -PermissionsToSecrets get `
- -PermissionsToCertificates get
-```
-
-In Azure Government, the App Service service principal has the ID of **6a02c803-dafd-4136-b4c3-5a6f318b4714**. Use that ID in the preceding example.
-
-In your Key Vault, select **Certificates** and **Generate/Import** to upload the certificate.
-
-![Import certificate](media/web-sites-rm-template-guidance/import-certificate.png)
-
-In your template, provide the name of the certificate for the `keyVaultSecretName`.
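-
-A minimal sketch of such a certificate resource is shown below. The parameter names, the `apiVersion`, and the server farm reference are illustrative assumptions; the linked sample below shows a complete, working template.
-
-```json
-{
-  "type": "Microsoft.Web/certificates",
-  "apiVersion": "2022-03-01",
-  "name": "[parameters('certificateName')]",
-  "location": "[resourceGroup().location]",
-  "properties": {
-    "keyVaultId": "[parameters('existingKeyVaultId')]",
-    "keyVaultSecretName": "[parameters('existingKeyVaultSecretName')]",
-    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]"
-  }
-}
-```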
-
-For an example template, see [Deploy a Web App certificate from Key Vault secret and use it for creating SSL binding](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-certificate-from-key-vault).
-
-## Next steps
-
-* For a tutorial on deploying web apps with a template, see [Provision and deploy microservices predictably in Azure](deploy-complex-application-predictably.md).
-* To learn about JSON syntax and properties for resource types in templates, see [Azure Resource Manager template reference](/azure/templates/).
app-service Quickstart Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore.md
Title: "Quickstart: Deploy an ASP.NET web app"
description: Learn how to run web apps in Azure App Service by deploying your first ASP.NET app. ms.assetid: b1e6bd58-48d1-4007-9d6c-53fd6db061e3 Previously updated : 02/08/2022 Last updated : 05/03/2023 zone_pivot_groups: app-service-ide adobe-target: true
should be able to guide .NET devs, whether their app is .NET Core, .NET, or .N
As a .NET developer, when choosing an IDE and .NET TFM, you map to various OS requirements. For example, if you choose Visual Studio, you're developing the app on Windows, but you can still
-target cross-platform with .NET 6.0.
+target cross-platform with .NET 7.0.
| .NET / IDE | Visual Studio | Visual Studio for Mac | Visual Studio Code | Command line | |--||--|--|-|
-| .NET 6.0 | Windows | macOS | Cross-platform | Cross-platform |
+| .NET 7.0 | Windows | macOS | Cross-platform | Cross-platform |
| .NET Framework 4.8 | Windows | N/A | Windows | Windows | --> # Quickstart: Deploy an ASP.NET web app
-In this quickstart, you'll learn how to create and deploy your first ASP.NET web app to [Azure App Service](overview.md). App Service supports various versions of .NET apps, and provides a highly scalable, self-patching web hosting service. ASP.NET web apps are cross-platform and can be hosted on Linux or Windows. When you're finished, you'll have an Azure resource group consisting of an App Service hosting plan and an App Service with a deployed web application.
+In this quickstart, you learn how to create and deploy your first ASP.NET web app to [Azure App Service](overview.md). App Service supports various versions of .NET apps, and provides a highly scalable, self-patching web hosting service. ASP.NET web apps are cross-platform and can be hosted on Linux or Windows. When you're finished, you have an Azure resource group consisting of an App Service hosting plan and an App Service with a deployed web application.
Alternatively, you can deploy an ASP.NET web app as part of a [Windows or Linux container in App Service](quickstart-custom-container.md).
Alternatively, you can deploy an ASP.NET web app as part of a [Windows or Linux
:::zone target="docs" pivot="development-environment-vs"
-### [.NET 6.0](#tab/net60)
+### [.NET 7.0](#tab/net70)
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet). - <a href="https://www.visualstudio.com/downloads" target="_blank">Visual Studio 2022</a> with the **ASP.NET and web development** workload.
Alternatively, you can deploy an ASP.NET web app as part of a [Windows or Linux
--
-If you've already installed Visual Studio 2022:
+If you have already installed Visual Studio 2022:
1. Install the latest updates in Visual Studio by selecting **Help** > **Check for Updates**. 1. Add the workload by selecting **Tools** > **Get Tools and Features**.
If you've already installed Visual Studio 2022:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet). - <a href="https://www.visualstudio.com/downloads" target="_blank">Visual Studio Code</a>. - The <a href="https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack" target="_blank">Azure Tools</a> extension.-
-### [.NET 6.0](#tab/net60)
-
-<a href="https://dotnet.microsoft.com/download/dotnet/6.0" target="_blank">
- Install the latest .NET 6.0 SDK.
-</a>
-
-### [.NET Framework 4.8](#tab/netframework48)
-
-<a href="https://dotnet.microsoft.com/download/dotnet-framework/net48" target="_blank">
- Install the .NET Framework 4.8 Developer Pack.
-</a>
-
-> [!NOTE]
-> Visual Studio Code is a cross-platform code editor; however, .NET Framework is not. If you're developing .NET Framework apps with Visual Studio Code, consider using a Windows machine to satisfy the build dependencies.
--
+- <a href="https://dotnet.microsoft.com/download/dotnet/7.0" target="_blank">The latest .NET 7.0 SDK.</a>
:::zone-end
If you've already installed Visual Studio 2022:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet). - The <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a>.-- The .NET SDK (includes runtime and CLI).-
-### [.NET 6.0](#tab/net60)
-
-<a href="https://dotnet.microsoft.com/download/dotnet/6.0" target="_blank">
- Install the latest .NET 6.0 SDK.
-</a>
-
-### [.NET Framework 4.8](#tab/netframework48)
-
-<a href="https://dotnet.microsoft.com/download/dotnet/6.0" target="_blank">
- Install the latest .NET 6.0 SDK.
-</a> and <a href="https://dotnet.microsoft.com/download/dotnet-framework/net48" target="_blank">
- the .NET Framework 4.8 Developer Pack.
-</a>
-
-> [!NOTE]
-> The [.NET CLI](/dotnet/core/tools) and .NET 6.0 are both cross-platform, but .NET Framework is not. If you're developing .NET Framework apps with the .NET CLI, consider using a Windows machine to satisfy the build dependencies.
--
+- <a href="https://dotnet.microsoft.com/download/dotnet/7.0" target="_blank">The latest .NET 7.0 SDK.</a>
:::zone-end
If you've already installed Visual Studio 2022:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet). - The <a href="/powershell/azure/install-az-ps" target="_blank">Azure PowerShell</a>.-- The .NET SDK (includes runtime and CLI).-
-### [.NET 6.0](#tab/net60)
-
-<a href="https://dotnet.microsoft.com/download/dotnet/6.0" target="_blank">
- Install the latest .NET 6.0 SDK.
-</a>
-
-### [.NET Framework 4.8](#tab/netframework48)
-
-<a href="https://dotnet.microsoft.com/download/dotnet/6.0" target="_blank">
- Install the latest .NET 6.0 SDK.
-</a> and <a href="https://dotnet.microsoft.com/download/dotnet-framework/net48" target="_blank">
- the .NET Framework 4.8 Developer Pack.
-</a>
-
-> [!NOTE]
-> [Azure PowerShell](/powershell/azure/) and .NET 6.0 are both cross-platform, but .NET Framework is not. If you're developing .NET Framework apps with the .NET CLI, consider using a Windows machine to satisfy the build dependencies.
--
+- <a href="https://dotnet.microsoft.com/download/dotnet/7.0" target="_blank">The latest .NET 7.0 SDK.</a>
:::zone-end :::zone target="docs" pivot="development-environment-azure-portal"
-### [.NET 6.0](#tab/net60)
- - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet). - A GitHub account [Create an account for free](https://github.com/).
-### [.NET Framework 4.8](#tab/netframework48)
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).-- A GitHub account [Create an account for free](https://github.com/).
-
:::zone-end
-## Create an ASP.NET web app
+## 1. Create an ASP.NET web app
:::zone target="docs" pivot="development-environment-vs"
-### [.NET 6.0](#tab/net60)
+### [.NET 7.0](#tab/net70)
1. Open Visual Studio and then select **Create a new project**. 1. In **Create a new project**, find, and select **ASP.NET Core Web App**, then select **Next**. 1. In **Configure your new project**, name the application _MyFirstAzureWebApp_, and then select **Next**.
- :::image type="content" source="./media/quickstart-dotnet/configure-webapp-net.png" alt-text="Screenshot of Visual Studio - Configure ASP.NET 6.0 web app." lightbox="media/quickstart-dotnet/configure-webapp-net.png" border="true":::
+ :::image type="content" source="./media/quickstart-dotnetcore/configure-web-app-project.png" alt-text="Screenshot of Visual Studio - Configure ASP.NET 7.0 web app." lightbox="media/quickstart-dotnetcore/configure-web-app-project.png" border="true":::
-1. Select **.NET 6.0 (Long-term support)**.
+1. Select **.NET 7.0 (Standard-term support)**.
1. Ensure **Authentication Type** is set to **None**. Select **Create**.
- :::image type="content" source="media/quickstart-dotnet/vs-additional-info-net60.png" alt-text="Screenshot of Visual Studio - Additional info when selecting .NET 6.0." lightbox="media/quickstart-dotnet/vs-additional-info-net60.png" border="true":::
+ :::image type="content" source="media/quickstart-dotnetcore/vs-additional-info-net-70.png" alt-text="Screenshot of Visual Studio - Additional info when selecting .NET 7.0." lightbox="media/quickstart-dotnetcore/vs-additional-info-net-70.png" border="true":::
-1. From the Visual Studio menu, select **Debug** > **Start Without Debugging** to run the web app locally.
+1. From the Visual Studio menu, select **Debug** > **Start Without Debugging** to run the web app locally. If you see a message asking you to trust a self-signed certificate, select **Yes**.
- :::image type="content" source="media/quickstart-dotnet/local-webapp-net.png" alt-text="Screenshot of Visual Studio - ASP.NET Core 6.0 running locally." lightbox="media/quickstart-dotnet/local-webapp-net.png" border="true":::
+ :::image type="content" source="media/quickstart-dotnetcore/local-web-app-net.png" alt-text="Screenshot of Visual Studio - ASP.NET Core 7.0 running locally." lightbox="media/quickstart-dotnetcore/local-web-app-net.png" border="true":::
### [.NET Framework 4.8](#tab/netframework48)
If you've already installed Visual Studio 2022:
1. In **Create a new project**, find, and select **ASP.NET Web Application (.NET Framework)**, then select **Next**. 1. In **Configure your new project**, name the application _MyFirstAzureWebApp_, and then select **Create**.
- :::image type="content" source="media/quickstart-dotnet/configure-webapp-netframework48.png" alt-text="Screenshot of Visual Studio - Configure ASP.NET Framework 4.8 web app." lightbox="media/quickstart-dotnet/configure-webapp-netframework48.png" border="true":::
+ :::image type="content" source="media/quickstart-dotnet/configure-web-app-net-framework-48.png" alt-text="Screenshot of Visual Studio - Configure ASP.NET Framework 4.8 web app." lightbox="media/quickstart-dotnet/configure-web-app-net-framework-48.png" border="true":::
1. Select the **MVC** template. 1. Ensure **Authentication** is set to **No Authentication**. Select **Create**.
- :::image type="content" source="media/quickstart-dotnet/vs-mvc-no-auth-netframework48.png" alt-text="Screenshot of Visual Studio - Select the MVC template." lightbox="media/quickstart-dotnet/vs-mvc-no-auth-netframework48.png" border="true":::
+ :::image type="content" source="media/quickstart-dotnetcore/vs-mvc-no-auth-net-framework-48.png" alt-text="Screenshot of Visual Studio - Select the MVC template." lightbox="media/quickstart-dotnetcore/vs-mvc-no-auth-net-framework-48.png" border="true":::
1. From the Visual Studio menu, select **Debug** > **Start Without Debugging** to run the web app locally.
- :::image type="content" source="media/quickstart-dotnet/vs-local-webapp-netframework48.png" alt-text="Screenshot of Visual Studio - ASP.NET Framework 4.8 running locally." lightbox="media/quickstart-dotnet/vs-local-webapp-netframework48.png" border="true":::
-----
-1. In the terminal window, create a new folder named _MyFirstAzureWebApp_, and open it in Visual Studio Code.
-
- ```terminal
- mkdir MyFirstAzureWebApp
- code MyFirstAzureWebApp
- ```
-
-1. In Visual Studio Code, open the <a href="https://code.visualstudio.com/docs/editor/integrated-terminal" target="_blank">Terminal</a> window by typing `Ctrl` + `` ` ``.
-
-1. In Visual Studio Code terminal, create a new .NET web app using the [`dotnet new webapp`](/dotnet/core/tools/dotnet-new#web-options) command.
-
- ### [.NET 6.0](#tab/net60)
-
- ```dotnetcli
- dotnet new webapp -f net6.0
- ```
-
- ### [.NET Framework 4.8](#tab/netframework48)
-
- ```dotnetcli
- dotnet new webapp --target-framework-override net48
- ```
-
- > [!IMPORTANT]
- > The `--target-framework-override` flag is a free-form text replacement of the target framework moniker (TFM) for the project, and makes *no guarantees* that the supporting template exists or compiles. You can only build and run .NET Framework apps on Windows.
-
-
-
-1. From the **Terminal** in Visual Studio Code, run the application locally using the [`dotnet run`](/dotnet/core/tools/dotnet-run) command.
-
- ```dotnetcli
- dotnet run --urls=https://localhost:5001/
- ```
-
-1. Open a web browser, and navigate to the app at `https://localhost:5001`.
+ :::image type="content" source="media/quickstart-dotnetcore/vs-local-web-app-net-framework-48.png" alt-text="Screenshot of Visual Studio - ASP.NET Framework 4.8 running locally." lightbox="media/quickstart-dotnetcore/vs-local-web-app-net-framework-48.png" border="true":::
- ### [.NET 6.0](#tab/net60)
-
- You'll see the template ASP.NET Core 6.0 web app displayed in the page.
-
- :::image type="content" source="media/quickstart-dotnet/local-webapp-net.png" alt-text="Screenshot of Visual Studio Code - run .NET 6.0 in browser locally." lightbox="media/quickstart-dotnet/local-webapp-net.png" border="true":::
-
- ### [.NET Framework 4.8](#tab/netframework48)
-
- You'll see the template ASP.NET Framework 4.8 web app displayed in the page.
-
- :::image type="content" source="media/quickstart-dotnet/local-webapp-net48.png" alt-text="Screenshot of Visual Studio Code - run .NET 4.8 in browser locally." lightbox="media/quickstart-dotnet/local-webapp-net48.png" border="true":::
-
-
+--
:::zone-end <!-- markdownlint-disable MD044 --> <!-- markdownlint-enable MD044 --> 1. Open a terminal window on your machine to a working directory. Create a new .NET web app using the [`dotnet new webapp`](/dotnet/core/tools/dotnet-new#web-options) command, and then change directories into the newly created app. <!-- Please keep the following commands in two lines instead of one && separated line. The latter doesn't work in PowerShell -->
- ### [.NET 6.0](#tab/net60)
-
- ```dotnetcli
- dotnet new webapp -n MyFirstAzureWebApp --framework net6.0
- cd MyFirstAzureWebApp
- ```
-
- ### [.NET Framework 4.8](#tab/netframework48)
-
```dotnetcli
- dotnet new webapp -n MyFirstAzureWebApp --target-framework-override net48
+ dotnet new webapp -n MyFirstAzureWebApp --framework net7.0
cd MyFirstAzureWebApp ```
- > [!IMPORTANT]
- > The `--target-framework-override` flag is a free-form text replacement of the target framework moniker (TFM) for the project, and makes *no guarantees* that the supporting template exists or compiles. You can only build .NET Framework apps on Windows.
-
-
- 1. From the same terminal session, run the application locally using the [`dotnet run`](/dotnet/core/tools/dotnet-run) command. ```dotnetcli
If you've already installed Visual Studio 2022:
1. Open a web browser, and navigate to the app at `https://localhost:5001`.
- ### [.NET 6.0](#tab/net60)
-
- You'll see the template ASP.NET Core 6.0 web app displayed in the page.
-
- :::image type="content" source="media/quickstart-dotnet/local-webapp-net.png" alt-text="Screenshot of Visual Studio Code - ASP.NET Core 6.0 in local browser." lightbox="media/quickstart-dotnet/local-webapp-net.png" border="true":::
+ You see the template ASP.NET Core 7.0 web app displayed in the page.
- ### [.NET Framework 4.8](#tab/netframework48)
-
- You'll see the template ASP.NET Framework 4.8 web app displayed in the page.
-
- :::image type="content" source="media/quickstart-dotnet/local-webapp-net48.png" alt-text="Screenshot of Visual Studio Code - ASP.NET Framework 4.8 in local browser." lightbox="media/quickstart-dotnet/local-webapp-net48.png" border="true":::
+ :::image type="content" source="media/quickstart-dotnetcore/local-web-app-net.png" alt-text="Screenshot of Visual Studio Code - ASP.NET Core 7.0 in local browser." lightbox="media/quickstart-dotnetcore/local-web-app-net.png" border="true":::
-
- :::zone-end :::zone target="docs" pivot="development-environment-azure-portal"
-In this step we will fork a demo project to deploy.
+In this step, you fork a demo project to deploy.
-### [.NET 6.0](#tab/net60)
+### [.NET 7.0](#tab/net70)
-- Go to the [.NET 6.0 sample app](https://github.com/Azure-Samples/dotnetcore-docs-hello-world).-- Select the **Fork** button in the upper right on the GitHub page.-- Select the **Owner** and leave the default **Repository name**.-- Select **Create fork**.
+1. Go to the [.NET 7.0 sample app](https://github.com/Azure-Samples/dotnetcore-docs-hello-world).
+1. Select the **Fork** button in the upper right on the GitHub page.
+1. Select the **Owner** and leave the default **Repository name**.
+1. Select **Create fork**.
### [.NET Framework 4.8](#tab/netframework48) -- Go to the [.NET Framework 4.8 sample app](https://github.com/Azure-Samples/app-service-web-dotnet-get-started).-- Select the **Fork** button in the upper right on the GitHub page.-- Select the **Owner** and leave the default **Repository name**.-- Select **Create fork**.
+1. Go to the [.NET Framework 4.8 sample app](https://github.com/Azure-Samples/app-service-web-dotnet-get-started).
+1. Select the **Fork** button in the upper right on the GitHub page.
+1. Select the **Owner** and leave the default **Repository name**.
+1. Select **Create fork**.
+
+--
:::zone-end
-## Publish your web app
+## 2. Publish your web app
To publish your web app, you must first create and configure a new App Service that you can publish your app to.
-As part of setting up the App Service, you'll create:
+As part of setting up the App Service, you create:
- A new [resource group](../azure-resource-manager/management/overview.md#terminology) to contain all of the Azure resources for the service. - A new [Hosting Plan](overview-hosting-plans.md) that specifies the location, size, and features of the web server farm that hosts your app.
Follow these steps to create your App Service resources and publish your project
1. In **Solution Explorer**, right-click the **MyFirstAzureWebApp** project and select **Publish**. 1. In **Publish**, select **Azure** and then **Next**.
- :::image type="content" source="media/quickstart-dotnet/vs-publish-target-Azure.png" alt-text="Screenshot of Visual Studio - Publish the web app and target Azure." lightbox="media/quickstart-dotnet/vs-publish-target-Azure.png" border="true":::
+ :::image type="content" source="media/quickstart-dotnetcore/vs-publish-target-azure.png" alt-text="Screenshot of Visual Studio - Publish the web app and target Azure." lightbox="media/quickstart-dotnetcore/vs-publish-target-azure.png" border="true":::
1. Choose the **Specific target**, either **Azure App Service (Linux)** or **Azure App Service (Windows)**. Then, select **Next**.
Follow these steps to create your App Service resources and publish your project
1. Your options depend on whether you're signed in to Azure already and whether you have a Visual Studio account linked to an Azure account. Select either **Add an account** or **Sign in** to sign in to your Azure subscription. If you're already signed in, select the account you want.
- :::image type="content" source="media/quickstart-dotnet/sign-in-azure.png" border="true" alt-text="Screenshot of Visual Studio - Select sign in to Azure dialog." lightbox="media/quickstart-dotnet/sign-in-azure.png" :::
+ :::image type="content" source="media/quickstart-dotnetcore/sign-in-azure.png" border="true" alt-text="Screenshot of Visual Studio - Select sign in to Azure dialog." lightbox="media/quickstart-dotnetcore/sign-in-azure.png" :::
1. To the right of **App Service instances**, select **+**.
- :::image type="content" source="media/quickstart-dotnet/publish-new-app-service.png" border="true" alt-text="Screenshot of Visual Studio - New App Service app dialog." lightbox="media/quickstart-dotnet/publish-new-app-service.png" :::
+ :::image type="content" source="media/quickstart-dotnetcore/publish-new-app-service.png" border="true" alt-text="Screenshot of Visual Studio - New App Service app dialog." lightbox="media/quickstart-dotnetcore/publish-new-app-service.png" :::
1. For **Subscription**, accept the subscription that is listed or select a new one from the drop-down list. 1. For **Resource group**, select **New**. In **New resource group name**, enter *myResourceGroup* and select **OK**.
Follow these steps to create your App Service resources and publish your project
| **Location** | *West Europe* | The datacenter where the web app is hosted. | | **Size** | *Free* | [Pricing tier][app-service-pricing-tier] determines hosting features. |
- :::image type="content" source="media/quickstart-dotnet/create-new-hosting-plan.png" border="true" alt-text="Screenshot of Create new Hosting Plan screen in the Azure portal." lightbox="media/quickstart-dotnet/create-new-hosting-plan.png" :::
- 1. In **Name**, enter a unique app name that includes only the valid characters `a-z`, `A-Z`, `0-9`, and `-`. You can accept the automatically generated unique name. The URL of the web app is `http://<app-name>.azurewebsites.net`, where `<app-name>` is your app name. 1. Select **Create** to create the Azure resources.
- :::image type="content" source="media/quickstart-dotnet/web-app-name.png" border="true" alt-text="Screenshot of Visual Studio - Create app resources dialog." lightbox="media/quickstart-dotnet/web-app-name.png" :::
+ :::image type="content" source="media/quickstart-dotnetcore/web-app-name.png" border="true" alt-text="Screenshot of Visual Studio - Create app resources dialog." lightbox="media/quickstart-dotnetcore/web-app-name.png" :::
+
+ Once the wizard completes, the Azure resources are created for you, and you're ready to publish your ASP.NET Core project.
- Once the wizard completes, the Azure resources are created for you and you're ready to publish your ASP.NET Core project.
+1. In the **Publish** dialog, ensure your new App Service app is selected, then select **Finish**, then select **Close**. Visual Studio creates a publish profile for you for the selected App Service app.
-1. In the **Publish** dialog, ensure your new App Service app is selected in **App Service instance**, then select **Finish**. Visual Studio creates a publish profile for you for the selected App Service app.
1. In the **Publish** page, select **Publish**. If you see a warning message, select **Continue**. Visual Studio builds, packages, and publishes the app to Azure, and then launches the app in the default browser.
- ### [.NET 6.0](#tab/net60)
+ ### [.NET 7.0](#tab/net70)
- You'll see the ASP.NET Core 6.0 web app displayed in the page.
+ You see the ASP.NET Core 7.0 web app displayed in the page.
- :::image type="content" source="media/quickstart-dotnet/Azure-webapp-net.png" lightbox="media/quickstart-dotnet/Azure-webapp-net.png" border="true" alt-text="Screenshot of Visual Studio - ASP.NET Core 6.0 web app in Azure." :::
+ :::image type="content" source="media/quickstart-dotnetcore/azure-web-app-net.png" lightbox="media/quickstart-dotnetcore/azure-web-app-net.png" border="true" alt-text="Screenshot of Visual Studio - ASP.NET Core 7.0 web app in Azure." :::
### [.NET Framework 4.8](#tab/netframework48)
- You'll see the ASP.NET Framework 4.8 web app displayed in the page.
+ You see the ASP.NET Framework 4.8 web app displayed in the page.
- :::image type="content" source="media/quickstart-dotnet/vs-Azure-webapp-net48.png" lightbox="media/quickstart-dotnet/vs-Azure-webapp-net48.png" border="true" alt-text="Screenshot of Visual Studio - ASP.NET Framework 4.8 web app in Azure.":::
+ :::image type="content" source="media/quickstart-dotnetcore/vs-azure-web-app-net-48.png" lightbox="media/quickstart-dotnetcore/vs-azure-web-app-net-48.png" border="true" alt-text="Screenshot of Visual Studio - ASP.NET Framework 4.8 web app in Azure.":::
-
+ --
:::zone-end
Follow these steps to create your App Service resources and publish your project
<!-- :::image type="content" source="media/quickstart-dotnet/vscode-sign-in-to-Azure.png" alt-text="Screenshot of Visual Studio Code - Sign in to Azure." border="true"::: -->
-1. In Visual Studio Code, open the [**Command Palette**](https://code.visualstudio.com/docs/getstarted/userinterface#_command-palette), <kbd>Ctrl</kbd>+<kbd>Shift</kbd>+<kbd>P</kbd>.
-1. Search for and select "Azure App Service: Deploy to Web App".
+1. In Visual Studio Code, open the [**Command Palette**](https://code.visualstudio.com/docs/getstarted/userinterface#_command-palette) by selecting **View** > **Command Palette**.
+1. Search for and select "Azure App Service: Create New Web App (Advanced)".
1. Respond to the prompts as follows:
- 1. Select *MyFirstAzureWebApp* as the folder to deploy.
- 1. Select **Add Config** when prompted.
1. If prompted, sign in to your Azure account. 1. Select your **Subscription**. 1. Select **Create new Web App... Advanced**. 1. For **Enter a globally unique name**, use a name that's unique across all of Azure (*valid characters are `a-z`, `0-9`, and `-`*). A good pattern is to use a combination of your company name and an app identifier. 1. Select **Create new resource group** and provide a name like `myResourceGroup`.
- 1. When prompted to **Select a runtime stack**:
- - For *.NET 6.0*, select **.NET 6**
- - For *.NET Framework 4.8*, select **ASP.NET V4.8**
+ 1. When prompted to **Select a runtime stack**, select **.NET 7 (STS)**.
1. Select an operating system (Windows or Linux).
- - For *.NET Framework 4.8*, Windows will be selected implicitly.
1. Select a location near you. 1. Select **Create a new App Service plan**, provide a name, and select the **F1 Free** [pricing tier][app-service-pricing-tier]. 1. Select **Skip for now** for the Application Insights resource.
+ 1. When prompted, click **Deploy**.
+ 1. Select *MyFirstAzureWebApp* as the folder to deploy.
+ 1. Select **Add Config** when prompted.
1. In the popup **Always deploy the workspace "MyFirstAzureWebApp" to \<app-name>"**, select **Yes** so that Visual Studio Code deploys to the same App Service app every time you're in that workspace. 1. When publishing completes, select **Browse Website** in the notification and select **Open** when prompted.
- ### [.NET 6.0](#tab/net60)
-
- You'll see the ASP.NET Core 6.0 web app displayed in the page.
-
- :::image type="content" source="media/quickstart-dotnet/Azure-webapp-net.png" lightbox="media/quickstart-dotnet/Azure-webapp-net.png" border="true" alt-text="Screenshot of Visual Studio Code - ASP.NET Core 6.0 web app in Azure.":::
-
- ### [.NET Framework 4.8](#tab/netframework48)
-
- You'll see the ASP.NET Framework 4.8 web app displayed in the page.
-
- :::image type="content" source="media/quickstart-dotnet/Azure-webapp-net48.png" lightbox="media/quickstart-dotnet/vs-Azure-webapp-net48.png" border="true" alt-text="Screenshot of Visual Studio Code - ASP.NET Framework 4.8 web app in Azure.":::
+ You see the ASP.NET Core 7.0 web app displayed in the page.
-
+ :::image type="content" source="media/quickstart-dotnetcore/azure-web-app-net.png" lightbox="media/quickstart-dotnetcore/azure-web-app-net.png" border="true" alt-text="Screenshot of Visual Studio Code - ASP.NET Core 7.0 web app in Azure.":::
:::zone-end
Follow these steps to create your App Service resources and publish your project
- If the `az` command isn't recognized, ensure you have the Azure CLI installed as described in [Prerequisites](#prerequisites). - Replace `<app-name>` with a name that's unique across all of Azure (*valid characters are `a-z`, `0-9`, and `-`*). A good pattern is to use a combination of your company name and an app identifier. - The `--sku F1` argument creates the web app on the **Free** [pricing tier][app-service-pricing-tier]. Omit this argument to use a faster premium tier, which incurs an hourly cost.
- - Replace `<os>` with either `linux` or `windows`. You must use `windows` when targeting *ASP.NET Framework 4.8*.
+ - Replace `<os>` with either `linux` or `windows`.
- You can optionally include the argument `--location <location-name>` where `<location-name>` is an available Azure region. You can retrieve a list of allowable regions for your Azure account by running the [`az account list-locations`](/cli/azure/appservice#az-appservice-list-locations) command.
- The command might take a few minutes to complete. While running, it provides messages about creating the resource group, the App Service plan, and hosting app, configuring logging, then performing ZIP deployment. Then it shows a message with the app's URL:
+ The command might take a few minutes to complete. While it's running, the command provides messages about creating the resource group, the App Service plan, and hosting app, configuring logging, then performing ZIP deployment. Then it shows a message with the app's URL:
```azurecli You can launch the app at http://<app-name>.azurewebsites.net
Follow these steps to create your App Service resources and publish your project
1. Open a web browser and navigate to the URL:
- ### [.NET 6.0](#tab/net60)
-
- You'll see the ASP.NET Core 6.0 web app displayed in the page.
-
- :::image type="content" source="media/quickstart-dotnet/Azure-webapp-net.png" lightbox="media/quickstart-dotnet/Azure-webapp-net.png" border="true" alt-text="Screenshot of the CLI - ASP.NET Core 6.0 web app in Azure.":::
-
- ### [.NET Framework 4.8](#tab/netframework48)
+ You see the ASP.NET Core 7.0 web app displayed in the page.
- You'll see the ASP.NET Framework 4.8 web app displayed in the page.
+ :::image type="content" source="media/quickstart-dotnetcore/azure-web-app-net.png" lightbox="media/quickstart-dotnetcore/azure-web-app-net.png" border="true" alt-text="Screenshot of the CLI - ASP.NET Core 7.0 web app in Azure.":::
- :::image type="content" source="media/quickstart-dotnet/Azure-webapp-net48.png" lightbox="media/quickstart-dotnet/Azure-webapp-net48.png" border="true" alt-text="Screenshot of the CLI - ASP.NET Framework 4.8 web app in Azure.":::
-
- --
- :::zone-end <!-- markdownlint-disable MD044 -->
Follow these steps to create your App Service resources and publish your project
- Replace `<app-name>` with a name that's unique across all of Azure (*valid characters are `a-z`, `0-9`, and `-`*). A combination of your company name and an app identifier is a good pattern. - You can optionally include the parameter `-Location <location-name>` where `<location-name>` is an available Azure region. You can retrieve a list of allowable regions for your Azure account by running the [`Get-AzLocation`](/powershell/module/az.resources/get-azlocation) command.
- The command might take a few minutes to complete. While running, it creates a resource group, an App Service plan, and the App Service resource.
+ The command might take a few minutes to complete. While it's running, the command creates a resource group, an App Service plan, and the App Service resource.
<!-- ### [Deploy to Linux](#tab/linux)
Follow these steps to create your App Service resources and publish your project
1. Change to the release directory and create a zip file from the contents:
- ### [.NET 6.0](#tab/net60)
-
- ```powershell-interactive
- cd bin\Release\net6.0\publish
- Compress-Archive -Path * -DestinationPath deploy.zip
- ```
-
- ### [.NET Framework 4.8](#tab/netframework48)
- ```powershell-interactive
- cd bin\Release\net48\publish
+ cd bin\Release\net7.0\publish
Compress-Archive -Path * -DestinationPath deploy.zip ```
- --
- 1. Publish the zip file to the Azure app using the [Publish-AzWebApp](/powershell/module/az.websites/publish-azwebapp) command: ```azurepowershell-interactive
Follow these steps to create your App Service resources and publish your project
1. Open a web browser and navigate to the URL:
- ### [.NET 6.0](#tab/net60)
-
- You'll see the ASP.NET Core 6.0 web app displayed in the page.
-
- :::image type="content" source="media/quickstart-dotnet/Azure-webapp-net.png" lightbox="media/quickstart-dotnet/Azure-webapp-net.png" border="true" alt-text="Screenshot of the CLI - ASP.NET Core 6.0 web app in Azure.":::
+ You see the ASP.NET Core 7.0 web app displayed in the page.
- ### [.NET Framework 4.8](#tab/netframework48)
-
- You'll see the ASP.NET Framework 4.8 web app displayed in the page.
+ :::image type="content" source="media/quickstart-dotnetcore/azure-web-app-net.png" lightbox="media/quickstart-dotnetcore/azure-web-app-net.png" border="true" alt-text="Screenshot of the CLI - ASP.NET Core 7.0 web app in Azure.":::
- :::image type="content" source="media/quickstart-dotnet/Azure-webapp-net48.png" lightbox="media/quickstart-dotnet/Azure-webapp-net48.png" border="true" alt-text="Screenshot of the CLI - ASP.NET Framework 4.8 web app in Azure.":::
-
- --
- :::zone-end :::zone target="docs" pivot="development-environment-azure-portal"
Follow these steps to create your App Service resources and publish your project
1. In the **App Services** page, select **+ Create**.
-1. In the **Basics** tab, under **Project details**, ensure the correct subscription is selected and then select to **Create new** resource group. Type *myResourceGroup* for the name.
-
- :::image type="content" source="./media/quickstart-dotnet/project-details.png" alt-text="Screenshot of the Project details section showing where you select the Azure subscription and the resource group for the web app.":::
+1. In the **Basics** tab:
-1. Under **Instance details**:
-
- ### [.NET 6.0](#tab/net60)
+ ### [.NET 7.0](#tab/net70)
+ - Under **Resource group**, select **Create new**. Type *myResourceGroup* for the name.
- Under **Name**, type a globally unique name for your web app. - Under **Publish**, select *Code*.
- - Under **Runtime stack** select *.NET 6 (LTS)*.
+ - Under **Runtime stack** select *.NET 7 (STS)*.
- Select an **Operating System**, and a **Region** you want to serve your app from.
+ - Under **App Service Plan**, select **Create new** and type *myAppServicePlan* for the name.
+ - Under **Pricing plan**, select **Free F1**.
+
+ :::image type="content" source="./media/quickstart-dotnetcore/app-service-details-net-70.png" lightbox="./media/quickstart-dotnetcore/app-service-details-net-70.png" alt-text="Screenshot of new App Service app configuration for .NET 7 in the Azure portal.":::
- :::image type="content" source="media/quickstart-dotnet/app-service-dotnet-60.png" lightbox="media/quickstart-dotnet/Azure-webapp-net.png" border="true" alt-text="Screenshot of the App Service Instance Details with a .NET 6 runtime.":::
-
### [.NET Framework 4.8](#tab/netframework48)
+ - Under **Resource group**, select **Create new**. Type *myResourceGroup* for the name.
- Under **Name**, type a globally unique name for your web app. - Under **Publish**, select *Code*. - Under **Runtime stack** select *ASP.NET V4.8*. - Select an **Operating System**, and a **Region** you want to serve your app from.-
-
- :::image type="content" source="media/quickstart-dotnet/app-service-dotnet-48.png" lightbox="media/quickstart-dotnet/Azure-webapp-net.png" border="true" alt-text="Screenshot of the App Service Instance Details with a ASP.NET V4.8 runtime.":::
+ - Under **App Service Plan**, select **Create new** and type *myAppServicePlan* for the name.
+ - Under **Pricing plan**, select **Free F1**.
- --
+ :::image type="content" source="./media/quickstart-dotnetcore/app-service-details-net-48.png" lightbox="./media/quickstart-dotnetcore/app-service-details-net-48.png" alt-text="Screenshot of new App Service app configuration for .NET Framework V4.8 in the Azure portal.":::
-1. Under **App Service Plan**, select **Create new** App Service Plan. Type *myAppServicePlan* for the name. To change to the Free tier, select **Change size**, select **Dev/Test** tab, select **F1**, and select the **Apply** button at the bottom of the page.
-
- :::image type="content" source="./media/quickstart-dotnet/app-service-plan-details.png" alt-text="Screenshot of the Administrator account section where you provide the administrator username and password.":::
+ --
1. Select the **Next: Deployment >** button at the bottom of the page.
Follow these steps to create your App Service resources and publish your project
1. Under **GitHub Actions details**, authenticate with your GitHub account, and select the following options:
- ### [.NET 6.0](#tab/net60)
+ ### [.NET 7.0](#tab/net70)
- For **Organization** select the organization where you have forked the demo project. - For **Repository** select the *dotnetcore-docs-hello-world* project. - For **Branch** select *master*.
- :::image type="content" source="media/quickstart-dotnet/app-service-deploy-60.png" lightbox="media/quickstart-dotnet/Azure-webapp-net.png" border="true" alt-text="Screenshot of the deployment options for an app using the .NET 6 runtime.":::
+ :::image type="content" source="media/quickstart-dotnet/app-service-deploy-60.png" lightbox="media/quickstart-dotnet/app-service-deploy-60.png" border="true" alt-text="Screenshot of the deployment options for an app using the .NET 6 runtime.":::
### [.NET Framework 4.8](#tab/netframework48)
Follow these steps to create your App Service resources and publish your project
- For **Repository** select the *app-service-web-dotnet-get-started* project. - For **Branch** select *master*.
- :::image type="content" source="media/quickstart-dotnet/app-service-deploy-48.png" lightbox="media/quickstart-dotnet/Azure-webapp-net.png" border="true" alt-text="Screenshot of the deployment options for an app using the .NET Framework 4.8 runtime.":::
+ :::image type="content" source="media/quickstart-dotnet/app-service-deploy-48.png" lightbox="media/quickstart-dotnet/app-service-deploy-48.png" border="true" alt-text="Screenshot of the deployment options for an app using the .NET Framework 4.8 runtime.":::
-- 1. Select the **Review + create** button at the bottom of the page.
- :::image type="content" source="./media/quickstart-dotnet/review-create.png" alt-text="Screenshot of the Review and create button at the bottom of the page.":::
- 1. After validation runs, select the **Create** button at the bottom of the page. 1. After deployment is complete, select **Go to resource**.
Follow these steps to create your App Service resources and publish your project
1. Browse to the deployed application in your web browser at the URL `http://<app-name>.azurewebsites.net`.
- ### [.NET 6.0](#tab/net60)
+ ### [.NET 7.0](#tab/net70)
- :::image type="content" source="media/quickstart-dotnet/browse-dotnet-60.png" lightbox="media/quickstart-dotnet/Azure-webapp-net.png" border="true" alt-text="Screenshot of the deployed sample app.":::
+ :::image type="content" source="media/quickstart-dotnetcore/browse-dotnet-70.png" lightbox="media/quickstart-dotnetcore/browse-dotnet-70.png" border="true" alt-text="Screenshot of the deployed .NET 7.0 sample app.":::
### [.NET Framework 4.8](#tab/netframework48)
- :::image type="content" source="media/quickstart-dotnet/browse-dotnet-48.png" lightbox="media/quickstart-dotnet/Azure-webapp-net.png" border="true" alt-text="Screenshot of the deployed sample app.":::
+ :::image type="content" source="media/quickstart-dotnet/browse-dotnet-48.png" lightbox="media/quickstart-dotnet/browse-dotnet-48.png" border="true" alt-text="Screenshot of the deployed .NET Framework 4.8 sample app.":::
-- :::zone-end --
-## Update the app and redeploy
+## 3. Update the app and redeploy
Follow these steps to update and redeploy your web app:
Follow these steps to update and redeploy your web app:
When publishing completes, Visual Studio launches a browser to the URL of the web app.
- ### [.NET 6.0](#tab/net60)
+ ### [.NET 7.0](#tab/net70)
- You'll see the updated ASP.NET Core 6.0 web app displayed in the page.
+ You see the updated ASP.NET Core 7.0 web app displayed in the page.
- :::image type="content" source="media/quickstart-dotnet/updated-Azure-webapp-net.png" lightbox="media/quickstart-dotnet/updated-Azure-webapp-net.png" border="true" alt-text="Screenshot of Visual Studio - Updated ASP.NET Core 6.0 web app in Azure.":::
+ :::image type="content" source="media/quickstart-dotnetcore/updated-azure-web-app-net.png" lightbox="media/quickstart-dotnetcore/updated-azure-web-app-net.png" border="true" alt-text="Screenshot of Visual Studio - Updated ASP.NET Core 7.0 web app in Azure.":::
### [.NET Framework 4.8](#tab/netframework48)
- You'll see the updated ASP.NET Framework 4.8 web app displayed in the page.
+ You see the updated ASP.NET Framework 4.8 web app displayed in the page.
- :::image type="content" source="media/quickstart-dotnet/vs-updated-Azure-webapp-net48.png" lightbox="media/quickstart-dotnet/vs-updated-Azure-webapp-net48.png" border="true" alt-text="Screenshot of Visual Studio - Updated ASP.NET Framework 4.8 web app in Azure.":::
+ :::image type="content" source="media/quickstart-dotnetcore/vs-updated-azure-web-app-net-48.png" lightbox="media/quickstart-dotnetcore/vs-updated-azure-web-app-net-48.png" border="true" alt-text="Screenshot of Visual Studio - Updated ASP.NET Framework 4.8 web app in Azure.":::
-
+ --
:::zone-end
Follow these steps to update and redeploy your web app:
1. Select **Deploy** when prompted. 1. When publishing completes, select **Browse Website** in the notification and select **Open** when prompted.
- ### [.NET 6.0](#tab/net60)
-
- You'll see the updated ASP.NET Core 6.0 web app displayed in the page.
-
- :::image type="content" source="media/quickstart-dotnet/updated-Azure-webapp-net.png" lightbox="media/quickstart-dotnet/updated-Azure-webapp-net.png" border="true" alt-text="Screenshot of Visual Studio Code - Updated ASP.NET Core 6.0 web app in Azure.":::
-
- ### [.NET Framework 4.8](#tab/netframework48)
-
- You'll see the updated ASP.NET Framework 4.8 web app displayed in the page.
-
- :::image type="content" source="media/quickstart-dotnet/updated-Azure-webapp-net48.png" lightbox="media/quickstart-dotnet/updated-Azure-webapp-net48.png" border="true" alt-text="Screenshot of Visual Studio Code - Updated ASP.NET Framework 4.8 web app in Azure.":::
+ You see the updated ASP.NET Core 7.0 web app displayed in the page.
-
+ :::image type="content" source="media/quickstart-dotnetcore/updated-azure-web-app-net.png" lightbox="media/quickstart-dotnetcore/updated-azure-web-app-net.png" border="true" alt-text="Screenshot of Visual Studio Code - Updated ASP.NET Core 7.0 web app in Azure.":::
:::zone-end
In the local directory, open the *Index.cshtml* file. Replace the first `<div>`
Save your changes, then redeploy the app using the `az webapp up` command again:
-### [.NET 6.0](#tab/net60)
-
-ASP.NET Core 6.0 is cross-platform, based on your previous deployment replace `<os>` with either `linux` or `windows`.
+ASP.NET Core 7.0 is cross-platform. Based on your previous deployment, replace `<os>` with either `linux` or `windows`.
```azurecli az webapp up --os-type <os> ```
-### [.NET Framework 4.8](#tab/netframework48)
-
-ASP.NET Framework 4.8 has framework dependencies, and must be hosted on Windows.
-
-```azurecli
-az webapp up --os-type windows
-```
-
-> [!TIP]
-> If you're interested in hosting your .NET apps on Linux, consider migrating from [ASP.NET Framework to ASP.NET Core](/aspnet/core/migration/proper-to-2x).
--- This command uses values that are cached locally in the *.azure/config* file, including the app name, resource group, and App Service plan. Once deployment has completed, switch back to the browser window that opened in the **Browse to the app** step, and hit refresh.
-### [.NET 6.0](#tab/net60)
-
-You'll see the updated ASP.NET Core 6.0 web app displayed in the page.
--
-### [.NET Framework 4.8](#tab/netframework48)
-
-You'll see the updated ASP.NET Framework 4.8 web app displayed in the page.
+You see the updated ASP.NET Core 7.0 web app displayed in the page.
-- :::zone-end
You'll see the updated ASP.NET Framework 4.8 web app displayed in the page.
1. Change to the release directory and create a zip file from the contents:
- ### [.NET 6.0](#tab/net60)
-
- ```powershell-interactive
- cd bin\Release\net6.0\publish
- Compress-Archive -Path * -DestinationPath deploy.zip
- ```
-
- ### [.NET Framework 4.8](#tab/netframework48)
- ```powershell-interactive
- cd bin\Release\net48\publish
+ cd bin\Release\net7.0\publish
Compress-Archive -Path * -DestinationPath deploy.zip ```
- --
- 1. Publish the zip file to the Azure app using the [Publish-AzWebApp](/powershell/module/az.websites/publish-azwebapp) command: ```azurepowershell-interactive
You'll see the updated ASP.NET Framework 4.8 web app displayed in the page.
1. Once deployment has completed, switch back to the browser window that opened in the **Browse to the app** step, and hit refresh.
- ### [.NET 6.0](#tab/net60)
-
- You'll see the updated ASP.NET Core 6.0 web app displayed in the page.
+ You see the updated ASP.NET Core 7.0 web app displayed in the page.
- :::image type="content" source="media/quickstart-dotnet/updated-Azure-webapp-net.png" lightbox="media/quickstart-dotnet/updated-Azure-webapp-net.png" border="true" alt-text="Screenshot of the CLI - Updated ASP.NET Core 6.0 web app in Azure.":::
-
- ### [.NET Framework 4.8](#tab/netframework48)
-
- You'll see the updated ASP.NET Framework 4.8 web app displayed in the page.
-
- :::image type="content" source="media/quickstart-dotnet/updated-Azure-webapp-net48.png" lightbox="media/quickstart-dotnet/updated-Azure-webapp-net48.png" border="true" alt-text="Screenshot of the CLI - Updated ASP.NET Framework 4.8 web app in Azure.":::
-
-
+ :::image type="content" source="media/quickstart-dotnetcore/updated-azure-web-app-net.png" lightbox="media/quickstart-dotnetcore/updated-azure-web-app-net.png" border="true" alt-text="Screenshot of the CLI - Updated ASP.NET Core 7.0 web app in Azure.":::
:::zone-end
You'll see the updated ASP.NET Framework 4.8 web app displayed in the page.
1. Browse to your GitHub fork of the sample code.
-1. On your repo page, press `.` to start Visual Studio code within your browser.
+1. On your repo page, press `.` to start Visual Studio Code within your browser.
> [!NOTE] > The URL will change from GitHub.com to GitHub.dev. This feature only works with repos that have files. This does not work on empty repos.
- ### [.NET 6.0](#tab/net60)
+ ### [.NET 7.0](#tab/net70)
![Screenshot of forked dotnetcore-docs-hello-world GitHub repo with an annotation to Press the period key.](media/quickstart-dotnetcore/github-forked-dotnetcore-docs-hello-world-repo-press-period.png)
You'll see the updated ASP.NET Framework 4.8 web app displayed in the page.
![Screenshot of forked app-service-web-dotnet-get-started GitHub repo with an annotation to Press the period key.](media/quickstart-dotnetcore/github-forked-app-service-web-dotnet-get-started-repo-press-period.png)
+ --
+ 1. Open *Index.cshtml*.
- ### [.NET 6.0](#tab/net60)
+ ### [.NET 7.0](#tab/net70)
Index.cshtml is located in the `Pages` folder.
You'll see the updated ASP.NET Framework 4.8 web app displayed in the page.
![Screenshot of the Explorer window from Visual Studio Code in the browser, highlighting the Index.cshtml in the app-service-web-dotnet-get-started repo.](media/quickstart-dotnetcore/index-cshtml-in-explorer-dotnet-framework.png)
+ --
+ 2. Replace the first `<div>` element with the following code: ```razor
You'll see the updated ASP.NET Framework 4.8 web app displayed in the page.
3. From the **Source Control** menu, select the **Stage Changes** button to stage the change.
- ### [.NET 6.0](#tab/net60)
+ ### [.NET 7.0](#tab/net70)
![Screenshot of Visual Studio Code in the browser, highlighting the Source Control navigation in the sidebar, then highlighting the Stage Changes button in the Source Control panel.](media/quickstart-dotnetcore/visual-studio-code-in-browser-stage-changes-dotnetcore.png)
You'll see the updated ASP.NET Framework 4.8 web app displayed in the page.
![Screenshot of Visual Studio Code in the browser, highlighting the Source Control navigation in the sidebar, then highlighting the Stage Changes button in the Source Control panel.](media/quickstart-dotnetcore/visual-studio-code-in-browser-stage-changes-dotnet-framework.png)
+ --
+ 4. Enter a commit message such as `We love Azure`. Then, select **Commit and Push**.
- ### [.NET 6.0](#tab/net60)
+ ### [.NET 7.0](#tab/net70)
![Screenshot of Visual Studio Code in the browser, Source Control panel with a commit message of 'We love Azure' and the Commit and Push button highlighted.](media/quickstart-dotnetcore/visual-studio-code-in-browser-commit-push-dotnetcore.png)
You'll see the updated ASP.NET Framework 4.8 web app displayed in the page.
![Screenshot of Visual Studio Code in the browser, Source Control panel with a commit message of 'We love Azure' and the Commit and Push button highlighted.](media/quickstart-dotnetcore/visual-studio-code-in-browser-commit-push-dotnet-framework.png)
+ --
+ 5. Once deployment has completed, return to the browser window that opened during the **Browse to the app** step, and refresh the page.
- ### [.NET 6.0](#tab/net60)
+ ### [.NET 7.0](#tab/net70)
- You'll see the updated ASP.NET Core 6.0 web app displayed in the page.
+ You see the updated ASP.NET Core 7.0 web app displayed in the page.
- :::image type="content" source="media/quickstart-dotnet/updated-Azure-webapp-net.png" lightbox="media/quickstart-dotnet/updated-Azure-webapp-net.png" border="true" alt-text="Screenshot of the CLI - Updated ASP.NET Core 6.0 web app in Azure.":::
+ :::image type="content" source="media/quickstart-dotnetcore/portal-updated-dotnet-7.png" lightbox="media/quickstart-dotnetcore/portal-updated-dotnet-7.png" border="true" alt-text="Screenshot of the CLI - Updated ASP.NET Core 7.0 web app in Azure.":::
### [.NET Framework 4.8](#tab/netframework48)
- You'll see the updated ASP.NET Framework 4.8 web app displayed in the page.
+ You see the updated ASP.NET Framework 4.8 web app displayed in the page.
- :::image type="content" source="media/quickstart-dotnet/updated-Azure-webapp-net48.png" lightbox="media/quickstart-dotnet/updated-Azure-webapp-net48.png" border="true" alt-text="Screenshot of the CLI - Updated ASP.NET Framework 4.8 web app in Azure.":::
+ :::image type="content" source="media/quickstart-dotnet/updated-azure-webapp-net-48.png" lightbox="media/quickstart-dotnet/updated-azure-webapp-net-48.png" border="true" alt-text="Screenshot of the CLI - Updated ASP.NET Framework 4.8 web app in Azure.":::
+
+ --
:::zone-end
-## Manage the Azure app
+## 4. Manage the Azure app
To manage your web app, go to the [Azure portal](https://portal.azure.com), and search for and select **App Services**.
The **Overview** page for your web app, contains options for basic management li
## Next steps
-In this quickstart, you created and deployed an ASP.NET web app to Azure App Service.
-
-### [.NET 6.0](#tab/net60)
+### [.NET 7.0](#tab/net70)
Advance to the next article to learn how to create a .NET Core app and connect it to a SQL Database:
Advance to the next article to learn how to create a .NET Framework app and conn
> [!div class="nextstepaction"] > [Secure with custom domain and certificate](tutorial-secure-domain-certificate.md) -
+--
[app-service-pricing-tier]: https://azure.microsoft.com/pricing/details/app-service/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio
app-service Quickstart Html Uiex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-html-uiex.md
- Title: 'QuickStart: Create a static HTML web app'
-description: Deploy your first HTML Hello World to Azure App Service in minutes. You deploy using Git, which is one of many ways to deploy to App Service.
-- Previously updated : 08/23/2019-----
-# Create a static HTML web app in Azure
-
-This quickstart shows how to deploy a basic HTML+CSS site to <abbr title="An HTTP-based service for hosting web applications, REST APIs, and mobile back-end applications.">Azure App Service</abbr>. You'll complete this quickstart in [Cloud Shell](../cloud-shell/overview.md), but you can also run these commands locally with [Azure CLI](/cli/azure/install-azure-cli).
--
-## 1. Prepare your environment
--
-In [Cloud Shell](../cloud-shell/overview.md), create a quickstart directory and then change to it.
-
-```bash
-mkdir quickstart
-
-cd $HOME/quickstart
-```
-
-Next, run the following command to clone the sample app repository to your quickstart directory.
-
-```bash
-git clone https://github.com/Azure-Samples/html-docs-hello-world.git
-```
-<hr/>
-
-## 2. Create a web app
-
-Change to the directory that contains the sample code and run the [az webapp up](/cli/azure/webapp#az-webapp-up) command. **Replace** `<app-name>` with a globally unique name.
-
-```azurecli
-cd html-docs-hello-world
-
-az webapp up --location westeurope --name <app_name> --html
-```
-
-<details>
-<summary>Troubleshooting</summary>
-<ul>
-<li>If the <code>az</code> command isn't recognized, be sure you have the Azure CLI installed as described in <a href="#1-prepare-your-environment">Prepare your environment</a>.</li>
-<li>Replace <code>&lt;app-name&gt;</code> with a name that's unique across all of Azure (<em>valid characters are <code>a-z</code>, <code>0-9</code>, and <code>-</code></em>). A good pattern is to use a combination of your company name and an app identifier.</li>
-<li>The <code>--sku F1</code> argument creates the web app on the Free pricing tier. Omit this argument to use a faster premium tier, which incurs an hourly cost.</li>
-<li>The <code>--html</code> argument says to treat all folder content as static content and disable build automation.</li>
-<li>You can optionally include the argument <code>--location &lt;location-name&gt;</code> where <code>&lt;location-name&gt;</code> is an available Azure region. You can retrieve a list of allowable regions for your Azure account by running the <a href="/cli/azure/appservice#az-appservice-list-locations"><code>az account list-locations</code></a> command.</li>
-</ul>
-</details>
-
-The command may take a few minutes to complete.
-
-<details>
-<summary>What's <code>az webapp up</code> doing?</summary>
-<p>The <code>az webapp up</code> command does the following actions:</p>
-<ul>
-<li>Create a default resource group.</li>
-<li>Create a default App Service plan.</li>
-<li><a href="/cli/azure/webapp#az-webapp-create">Create an App Service app</a> with the specified name.</li>
-<li><a href="/azure/app-service/deploy-zip">Zip deploy</a> files from the current working directory to the app.</li>
-<li>While running, it provides messages about resource creation, logging, and ZIP deployment.</li>
-</ul>
-
-When it finishes, it displays information similar to the following example:
-
-```output
-{
- "app_url": "https://&lt;app_name&gt;.azurewebsites.net",
- "location": "westeurope",
- "name": "&lt;app_name&gt;",
- "os": "Windows",
- "resourcegroup": "appsvc_rg_Windows_westeurope",
- "serverfarm": "appsvc_asp_Windows_westeurope",
- "sku": "FREE",
- "src_path": "/home/&lt;username&gt;/quickstart/html-docs-hello-world ",
- &lt; JSON data removed for brevity. &gt;
-}
-```
-
-</details>
-
-You will need the `resourceGroup` value to [clean up resources](#6-clean-up-resources) later.
-
-<hr/>
-
-## 3. Browse to the app
-
-In a browser, go to the app URL: `http://<app_name>.azurewebsites.net`.
-
-The page is running as an Azure App Service web app.
-
-![Sample app home page](media/quickstart-html/hello-world-in-browser-az.png)
-
-<hr/>
-
-## 4. Update and redeploy the app
-
-In the Cloud Shell, use `sed` to change "Azure App Service - Sample Static HTML Site" to "Azure App Service".
-
-```bash
-sed -i 's/Azure App Service - Sample Static HTML Site/Azure App Service/' index.html
-```
-
-Redeploy the app with `az webapp up` command.
-
-```azurecli
-az webapp up --html
-```
-
-Switch back to the browser window that opened in the **Browse to the app** step.
-
-**Refresh** the page.
-
-![Updated sample app home page](media/quickstart-html/hello-azure-in-browser-az.png)
-
-<hr/>
-
-## 5. Manage your new Azure app
-
-**Navigate** to the [Azure portal](https://portal.azure.com).
-
-**Search** for and **select** **App Services**.
-
-![Select App Services in the Azure portal](./media/quickstart-html/portal0.png)
-
-**Select** the name of your Azure app.
-
-![Portal navigation to Azure app](./media/quickstart-html/portal1.png)
-
-You see your web app's Overview page. Here, you can perform basic management tasks like browse, stop, start, restart, and delete.
-
-![App Service blade in Azure portal](./media/quickstart-html/portal2.png)
-
-The left menu provides different pages for configuring your app.
-
-<hr/>
-
-## 6. Clean up resources
-
-In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group by running the following command in the Cloud Shell. Remember that the resource group name was automatically generated for you in the [create a web app](#2-create-a-web-app) step.
-
-```azurecli
-az group delete --name appsvc_rg_Windows_westeurope
-```
-
-This command may take a minute to run.
-
-<hr/>
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Secure with custom domain and certificate](tutorial-secure-domain-certificate.md)
app-service Quickstart Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-html.md
- Title: 'QuickStart: Create a static HTML web app'
-description: Deploy your first HTML Hello World to Azure App Service in minutes. You deploy using Git, which is one of many ways to deploy to App Service.
-- Previously updated : 11/18/2022----
-# Create a static HTML web app in Azure
-
-[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service. This quickstart shows how to deploy a basic HTML+CSS site to Azure App Service. You'll complete this quickstart in [Cloud Shell](../cloud-shell/overview.md), but you can also run these commands locally with [Azure CLI](/cli/azure/install-azure-cli).
-
-> [!NOTE]
-> For information regarding hosting static HTML files in a serverless environment, please see [Static Web Apps](../static-web-apps/overview.md).
----
-## Download the sample
-
-In the Cloud Shell, create a quickstart directory and then change to it.
-
-```bash
-mkdir quickstart
-
-cd $HOME/quickstart
-```
-
-Next, run the following command to clone the sample app repository to your quickstart directory.
-
-```bash
-git clone https://github.com/Azure-Samples/html-docs-hello-world.git
-```
-
-## Create a web app
-
-Change to the directory that contains the sample code and run the [az webapp up](/cli/azure/webapp#az-webapp-up) command. In the following example, replace <app_name> with a unique app name. Static content is indicated by the `--html` flag.
-
-```azurecli
-cd html-docs-hello-world
-
-az webapp up --location westeurope --name <app_name> --html
-```
-> [!NOTE]
-> If you want to host your static content on a Linux based App Service instance configure PHP as your runtime using the `--runtime` and `--os-type` flags:
->
-> `az webapp up --location westeurope --name <app_name> --runtime "PHP:8.1" --os-type linux`
->
-> The PHP container includes a web server that is suitable to host static HTML content.
---
-The `az webapp up` command does the following actions:
--- Create a default resource group.--- Create a default app service plan.--- Create an app with the specified name.--- [Zip deploy](./deploy-zip.md) files from the current working directory to the web app.-
-This command may take a few minutes to run. While running, it displays information similar to the following example:
-
-```output
-{
- "app_url": "https://&lt;app_name&gt;.azurewebsites.net",
- "location": "westeurope",
- "name": "&lt;app_name&gt;",
- "os": "Windows",
- "resourcegroup": "appsvc_rg_Windows_westeurope",
- "serverfarm": "appsvc_asp_Windows_westeurope",
- "sku": "FREE",
- "src_path": "/home/&lt;username&gt;/quickstart/html-docs-hello-world ",
- &lt; JSON data removed for brevity. &gt;
-}
-```
-
-Make a note of the `resourceGroup` value. You need it for the [clean up resources](#clean-up-resources) section.
-
-## Browse to the app
-
-In a browser, go to the app URL: `http://<app_name>.azurewebsites.net`.
-
-The page is running as an Azure App Service web app.
--
-**Congratulations!** You've deployed your first HTML app to App Service.
-
-## Update and redeploy the app
-
-In the Cloud Shell, use `sed` to change "Azure App Service - Sample Static HTML Site" to "Azure App Service".
-
-```bash
-sed -i 's/Azure App Service - Sample Static HTML Site/Azure App Service/' index.html
-```
-
-You'll now redeploy the app with the same `az webapp up` command.
-
-```azurecli
-az webapp up --location westeurope --name <app_name> --html
-```
-
-Once deployment has completed, switch back to the browser window that opened in the **Browse to the app** step, and refresh the page.
--
-## Manage your new Azure app
-
-To manage the web app you created, in the [Azure portal](https://portal.azure.com), search for and select **App Services**.
-
-![Select App Services in the Azure portal](./media/quickstart-html/portal0.png)
-
-On the **App Services** page, select the name of your Azure app.
-
-![Portal navigation to Azure app](./media/quickstart-html/portal1.png)
-
-You see your web app's Overview page. Here, you can perform basic management tasks like browse, stop, start, restart, and delete.
-
-![App Service blade in Azure portal](./media/quickstart-html/portal2.png)
-
-The left menu provides different pages for configuring your app.
-
-## Clean up resources
-
-In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group by running the following command in the Cloud Shell. Remember that the resource group name was automatically generated for you in the [create a web app](#create-a-web-app) step.
-
-```azurecli
-az group delete --name appsvc_rg_Windows_westeurope
-```
-
-This command may take a minute to run.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Secure with custom domain and certificate](tutorial-secure-domain-certificate.md)
app-service Scenario Secure App Authentication App Service As User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-authentication-app-service-as-user.md
- Title: Tutorial - Add user authentication to a web app on Azure App Service | Azure
-description: In this tutorial, you learn how to enable user authentication and authorization for a web app running on Azure App Service. Limit access to the web app to users in your organization.
------- Previously updated : 02/25/2022---
-#Customer intent: As an application developer, enable authentication and authorization for a web app running on Azure App Service.
---
-# Tutorial: Add user authentication to your web app running on Azure App Service
--
-## Connect to backend services as user
-
-User authentication can begin with authenticating the user to your app service as described in the previous section.
--
-Once the app service has the authenticated identity, your system needs to **connect to backend services as the user**:
-
-* A database example is a SQL database which imposes its own security for that identity on tables
-
-* A storage example is Blob Storage which imposes its own security for that identity on containers and blobs
-
-* A user needs access to Microsoft Graph to access their own email.
--
-> [!div class="nextstepaction"]
-> [App service accesses Graph](scenario-secure-app-authentication-app-service-as-user.md)
applied-ai-services Changelog Release History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/changelog-release-history.md
Last updated 04/24/2023
-recommendations: false
<!-- markdownlint-disable MD001 -->
applied-ai-services Concept Accuracy Confidence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-accuracy-confidence.md
Last updated 10/14/2022 monikerRange: '>=form-recog-2.1.0'
-recommendations: false
# Accuracy and confidence scores for custom models
applied-ai-services Concept Add On Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-add-on-capabilities.md
Last updated 04/25/2023 monikerRange: 'form-recog-3.0.0'
-recommendations: false
<!-- markdownlint-disable MD033 -->
applied-ai-services Concept Analyze Document Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-analyze-document-response.md
Last updated 12/15/2022
monikerRange: 'form-recog-3.0.0'
-recommendations: false
# Analyze document API response
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
Last updated 03/03/2023
-recommendations: false
<!-- markdownlint-disable MD033 -->
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
Last updated 03/03/2023
-recommendations: false
# Composed custom models
applied-ai-services Concept Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-classifier.md
Last updated 04/25/2023
monikerRange: 'form-recog-3.0.0'
-recommendations: false
# Custom classification model
applied-ai-services Concept Custom Label Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-label-tips.md
Last updated 01/30/2023
-recommendations: false
# Tips for labeling custom model datasets
applied-ai-services Concept Custom Label https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-label.md
Last updated 03/03/2023
monikerRange: 'form-recog-3.0.0'
-recommendations: false
# Best practices: Generating Form Recognizer labeled dataset
applied-ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-neural.md
Last updated 03/03/2023
monikerRange: 'form-recog-3.0.0'
-recommendations: false
# Custom neural document model
applied-ai-services Concept Custom Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-template.md
Last updated 12/07/2022
-recommendations: false
# Custom template document model
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
Last updated 03/03/2023 monikerRange: '>=form-recog-2.1.0'
-recommendations: false
# Azure Form Recognizer Custom document models
applied-ai-services Concept Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-form-recognizer-studio.md
Last updated 03/03/2023 monikerRange: 'form-recog-3.0.0'
-recommendations: false
# Form Recognizer Studio
applied-ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-general-document.md
Last updated 03/15/2023 monikerRange: 'form-recog-3.0.0'
-recommendations: false
<!-- markdownlint-disable MD033 -->
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
Last updated 03/03/2023
-recommendations: false
+ <!-- markdownlint-disable MD033 -->
applied-ai-services Concept Insurance Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-insurance-card.md
Last updated 03/03/2023 monikerRange: 'form-recog-3.0.0'
-recommendations: false
# Azure Form Recognizer health insurance card model
applied-ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md
Last updated 02/13/2023
-recommendations: false
<!-- markdownlint-disable MD033 -->
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
Last updated 03/15/2023
-recommendations: false
# Azure Form Recognizer layout model
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
Last updated 03/03/2023
-recommendations: false
<!-- markdownlint-disable MD024 -->
applied-ai-services Concept Query Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-query-fields.md
Last updated 04/25/2023 monikerRange: 'form-recog-3.0.0'
-recommendations: false
<!-- markdownlint-disable MD033 -->
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
Last updated 03/15/2023 monikerRange: 'form-recog-3.0.0'
-recommendations: false
# Form Recognizer read (OCR) model
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
Last updated 03/03/2023
-recommendations: false
<!-- markdownlint-disable MD033 -->
applied-ai-services Concept W2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-w2.md
Last updated 11/10/2022 monikerRange: 'form-recog-3.0.0'
-recommendations: false
# Form Recognizer W-2 form model
applied-ai-services Form Recognizer Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-configuration.md
Last updated 11/29/2022 monikerRange: 'form-recog-2.1.0'
-recommendations: false
# Configure Form Recognizer containers
applied-ai-services Form Recognizer Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-image-tags.md
Last updated 01/23/2023 monikerRange: 'form-recog-2.1.0'
-recommendations: false
# Form Recognizer container image tags and release notes
applied-ai-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md
Last updated 03/20/2023
-recommendations: false
# Install and run Form Recognizer containers
applied-ai-services Form Recognizer Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-disconnected-containers.md
Last updated 03/02/2023 monikerRange: 'form-recog-2.1.0'
-recommendations: false
# Form Recognizer containers in disconnected environments
applied-ai-services Create A Form Recognizer Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/create-a-form-recognizer-resource.md
Last updated 04/17/2023 monikerRange: '>=form-recog-2.1.0'
-recommendations: false
-#Customer intent: I want to learn how to use create a Form Recognizer service in the Azure portal.
# Create a Form Recognizer resource
applied-ai-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/create-sas-tokens.md
Last updated 10/26/2022 monikerRange: '>=form-recog-2.1.0'
-recommendations: false
# Create SAS tokens for storage containers
applied-ai-services Deploy Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/deploy-label-tool.md
Last updated 01/09/2023 monikerRange: 'form-recog-2.1.0'
-recommendations: false
# Deploy the Sample Labeling tool
applied-ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/encrypt-data-at-rest.md
Last updated 10/20/2022
monikerRange: '>=form-recog-2.1.0'
-recommendations: false
# Form Recognizer encryption of data at rest
applied-ai-services Build A Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-a-custom-classifier.md
Last updated 04/25/2023 monikerRange: 'form-recog-3.0.0'
-recommendations: false
# Build and train a custom classification model (preview)
applied-ai-services Build A Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-a-custom-model.md
Last updated 01/31/2023
-recommendations: false
# Build and train a custom model
applied-ai-services Compose Custom Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/compose-custom-models.md
Last updated 03/03/2023
-recommendations: false
# Compose custom models
applied-ai-services Estimate Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/estimate-cost.md
Last updated 10/20/2022 monikerRange: '>=form-recog-2.1.0'
-recommendations: false
# Check my Form Recognizer usage and estimate the price
applied-ai-services Project Share Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/project-share-custom-classifier.md
Last updated 04/17/2023 monikerRange: 'form-recog-3.0.0'
-recommendations: false
# Share custom model projects using Form Recognizer Studio
applied-ai-services Use Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api.md
Last updated 03/03/2023 zone_pivot_groups: programming-languages-set-formre
-recommendations: false
<!-- markdownlint-disable MD051 -->
applied-ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/label-tool.md
Last updated 03/03/2023 monikerRange: 'form-recog-2.1.0'
-recommendations: false
<!-- markdownlint-disable MD001 --> <!-- markdownlint-disable MD024 -->
applied-ai-services Managed Identities Secured Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/managed-identities-secured-access.md
Last updated 03/03/2023 monikerRange: '>=form-recog-2.1.0'
-recommendations: false
# Configure secure access with managed identities and private endpoints
applied-ai-services Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/managed-identities.md
Last updated 03/17/2023 monikerRange: '>=form-recog-2.1.0'
-recommendations: false
# Managed identities for Form Recognizer
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
Last updated 03/03/2023
-recommendations: false
<!-- markdownlint-disable MD033 -->
applied-ai-services Get Started Sdks Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/get-started-sdks-rest-api.md
Last updated 11/18/2022 zone_pivot_groups: programming-languages-set-formre
-recommendations: false
# Get started with Form Recognizer
applied-ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool.md
Last updated 10/10/2022 monikerRange: 'form-recog-2.1.0'
-recommendations: false
<!-- markdownlint-disable MD001 --> <!-- markdownlint-disable MD024 -->
applied-ai-services Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/sdk-overview.md
Last updated 04/25/2023
-recommendations: false
<!-- markdownlint-disable MD024 -->
applied-ai-services Sdk Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/sdk-preview.md
Last updated 04/25/2023 monikerRange: 'form-recog-3.0.0'
-recommendations: false
<!-- markdownlint-disable MD024 -->
applied-ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/service-limits.md
Last updated 03/03/2023
-recommendations: false
# Form Recognizer service quotas and limits
applied-ai-services Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/studio-overview.md
Last updated 02/14/2023 monikerRange: 'form-recog-3.0.0'
-recommendations: false
<!-- markdownlint-disable MD033 -->
applied-ai-services Supervised Table Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/supervised-table-tags.md
Last updated 01/09/2023
#Customer intent: As a user of the Form Recognizer custom model service, I want to ensure I'm training my model in the best way. monikerRange: 'form-recog-2.1.0'
-recommendations: false
# Train models with the sample-labeling tool
applied-ai-services Tutorial Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/tutorial-logic-apps.md
Last updated 08/22/2022 monikerRange: 'form-recog-2.1.0'
-recommendations: false
-#Customer intent: As a form-processing software developer, I want to learn how to use the Form Recognizer service with Logic Apps.
# Tutorial: Use Azure Logic Apps with Form Recognizer
applied-ai-services V3 Error Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/v3-error-guide.md
Last updated 10/07/2022 monikerRange: 'form-recog-3.0.0'
-recommendations: false
# Form Recognizer error guide v3.0
applied-ai-services V3 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/v3-migration-guide.md
Last updated 10/20/2022 monikerRange: '>=form-recog-2.1.0'
-recommendations: false
# Form Recognizer v3.0 migration
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Last updated 03/15/2023 monikerRange: '>=form-recog-2.1.0'
-recommendations: false
<!-- markdownlint-disable MD024 -->
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
>[!NOTE] > With the release of the 2022-08-31 GA API, the associated preview APIs are being deprecated. If you are using the 2021-09-30-preview or the 2022-01-30-preview API versions, please update your applications to target the 2022-08-31 API version. There are a few minor changes involved, for more information, _see_ the [migration guide](v3-migration-guide.md).
+## April 2023
+
+**Announcing the latest Azure Form Recognizer client-library public preview release**
+
+* The public preview release SDKs are supported by Form Recognizer REST API Version [2023-02-28-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument). This release includes the following new features and capabilities available for .NET/C# (4.1.0-beta-1), Java (4.1.0-beta-1), JavaScript (4.1.0-beta-1), and Python (3.3.0b.1) SDKs:
+
+ * [**Custom classification model**](concept-custom-classifier.md)
+
+ * [**Query fields extraction**](concept-query-fields.md)
+
+ * [**Add-on capabilities**](concept-add-on-capabilities.md)
+
+* For more information, _see_ [**Form Recognizer SDK (public preview)**](./sdk-preview.md) and the [March 2023 release](#march-2023) notes.
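+
+ As a quick orientation only, not an official sample: the preview features above are exposed through REST API version `2023-02-28-preview`, so a raw call from PowerShell might look like the sketch below. The endpoint, key, and document URL are placeholders, and the sketch uses the prebuilt layout model.
+
+ ```powershell
+ # Sketch only (placeholder endpoint, key, and document URL):
+ # submit a document to the 2023-02-28-preview analyze operation.
+ $endpoint = "https://<your-resource>.cognitiveservices.azure.com"
+ $key      = "<your-key>"
+ $body     = @{ urlSource = "https://<path-to-document>.pdf" } | ConvertTo-Json
+
+ Invoke-RestMethod -Method Post `
+     -Uri "$endpoint/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2023-02-28-preview" `
+     -Headers @{ "Ocp-Apim-Subscription-Key" = $key } `
+     -ContentType "application/json" `
+     -Body $body `
+     -ResponseHeadersVariable respHeaders   # requires PowerShell 7+
+
+ # The service replies with an Operation-Location header; poll that URL for the analysis result.
+ $respHeaders['Operation-Location']
+ ```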
+ ## March 2023 > [!IMPORTANT]
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
Title: Azure Automation runbook types
description: This article describes the types of runbooks that you can use in Azure Automation and considerations for determining which type to use. Previously updated : 04/04/2023 Last updated : 05/08/2023
The Azure Automation Process Automation feature supports several types of runboo
|: |: | | [PowerShell](#powershell-runbooks) |Textual runbook based on Windows PowerShell scripting. The currently supported versions are: PowerShell 5.1 (GA), PowerShell 7.1 (preview), and PowerShell 7.2 (preview).| | [PowerShell Workflow](#powershell-workflow-runbooks)|Textual runbook based on Windows PowerShell Workflow scripting. |
-| [Python](#python-runbooks) |Textual runbook based on Python scripting. The currently supported versions are: Python 2.7 (GA), Python 3.8 (preview), and Python 3.10 (preview). |
+| [Python](#python-runbooks) |Textual runbook based on Python scripting. The currently supported versions are: Python 2.7 (GA), Python 3.8 (GA), and Python 3.10 (preview). |
| [Graphical](#graphical-runbooks)|Graphical runbook based on Windows PowerShell and created and edited completely in the graphical editor in Azure portal. | | [Graphical PowerShell Workflow](#graphical-runbooks)|Graphical runbook based on Windows PowerShell Workflow and created and edited completely in the graphical editor in Azure portal. |
The following are the current limitations and known issues with PowerShell runbo
**Limitations** - You must be familiar with PowerShell scripting.-- The Azure Automation internal PowerShell cmdlets aren't supported on a Linux Hybrid Runbook Worker. You must import the `automationassets` module at the beginning of your Python runbook to access the Automation account shared resources (assets) functions.+
+- The Azure Automation internal PowerShell cmdlets aren't supported on a Linux Hybrid Runbook Worker. You must import the `automationassets` module at the beginning of your PowerShell runbook to access the Automation account shared resources (assets) functions.
- For the PowerShell 7 runtime version, the module activities aren't extracted for the imported modules. - *PSCredential* runbook parameter type isn't supported in PowerShell 7 runtime version.-- PowerShell 7.x doesn't support workflows. See [this](/powershell/scripting/whats-new/differences-from-windows-powershell#powershell-workflow) for more details.
+- PowerShell 7.x doesn't support workflows. For more information, see [PowerShell Workflow](/powershell/scripting/whats-new/differences-from-windows-powershell#powershell-workflow).
- PowerShell 7.x currently doesn't support signed runbooks. - Source control integration doesn't support PowerShell 7.1 (preview) Also, PowerShell 7.1 (preview) runbooks in source control gets created in Automation account as Runtime 5.1. - PowerShell 7.1 module management isn't supported through `Get-AzAutomationModule` cmdlets.-- Runbook will fail with no log trace if the input value contains the character '.-
+- The runbook fails with no log trace if the input value contains the single quote character (`'`).
**Known issues** - Executing child scripts using `.\child-runbook.ps1` isn't supported in this preview.
- **Workaround**: Use `Start-AutomationRunbook` (internal cmdlet) or `Start-AzAutomationRunbook` (from *Az.Automation* module) to start another runbook from parent runbook.
-- Runbook properties defining logging preference is not supported in PowerShell 7 runtime.
- **Workaround**: Explicitly set the preference at the start of the runbook as below -
+ **Workaround**: Use `Start-AutomationRunbook` (internal cmdlet) or `Start-AzAutomationRunbook` (from the `Az.Automation` module) to start another runbook from the parent runbook.
+- Runbook properties defining logging preference aren't supported in the PowerShell 7 runtime.
+ **Workaround**: Explicitly set the preference at the start of the runbook as shown below:
+ ``` $VerbosePreference = "Continue"
The following are the current limitations and known issues with PowerShell runbo
``` - Avoid importing `Az.Accounts` module to version 2.4.0 version for PowerShell 7 runtime as there can be an unexpected behavior using this version in Azure Automation. - You might encounter formatting problems with error output streams for the job running in PowerShell 7 runtime.-- When you import a PowerShell 7.1 module that's dependent on other modules, you may find that the import button is gray even when PowerShell 7.1 version of the dependent module is installed. For example, Az.Compute version 4.20.0, has a dependency on Az.Accounts being >= 2.6.0. This issue occurs when an equivalent dependent module in PowerShell 5.1 doesn't meet the version requirements. For example, 5.1 version of Az.Accounts were < 2.6.0.+
+- When you import a PowerShell 7.1 module that's dependent on other modules, you may find that the import button is gray even when the PowerShell 7.1 version of the dependent module is installed. For example, the Az.Compute PowerShell module version 4.20.0 has a dependency on Az.Accounts being >= 2.6.0. This issue occurs when an equivalent dependent module in PowerShell 5.1 doesn't meet the version requirements. For example, the PowerShell 5.1 version of Az.Accounts was < 2.6.0.+
+ - When you start a PowerShell 7 runbook using a webhook, it auto-converts the webhook input parameter to invalid JSON. - We recommend that you use the [ExchangeOnlineManagement](https://learn.microsoft.com/powershell/exchange/exchange-online-powershell?view=exchange-ps) module version 3.0.0 or lower, because version 3.0.0 or higher may lead to job failures.
The following are the current limitations and known issues with PowerShell runbo
- You must be familiar with PowerShell scripting. - For the PowerShell 7 runtime version, the module activities aren't extracted for the imported modules. - *PSCredential* runbook parameter type isn't supported in PowerShell 7 runtime version.-- PowerShell 7.x doesn't support workflows. See [this](/powershell/scripting/whats-new/differences-from-windows-powershell#powershell-workflow) for more details.
+- PowerShell 7.x doesn't support workflows. For more information, see [PowerShell workflow](/powershell/scripting/whats-new/differences-from-windows-powershell#powershell-workflow).
- PowerShell 7.x currently doesn't support signed runbooks. - Source control integration doesn't support PowerShell 7.2 (preview). Also, PowerShell 7.2 (preview) runbooks in source control get created in Automation account as Runtime 5.1.-- Logging job operations to the Log Analytics workspace through linked workspace or diagnostics settings are not supported.-- Currently, PowerShell 7.2 (preview) runbooks are only supported from Azure portal. Rest API and PowerShell isn't supported.
+- Currently, only cloud jobs are supported for the PowerShell 7.2 (preview) runtime version.
+- Logging job operations to the Log Analytics workspace through a linked workspace or diagnostic settings isn't supported.
+- Currently, PowerShell 7.2 (preview) runbooks are only supported from the Azure portal. REST API and PowerShell aren't supported.
- Az module 8.3.0 is installed by default and can't be managed at the automation account level. Use custom modules to override the Az module to the desired version. - The imported PowerShell 7.2 (preview) module would be validated during job execution. Ensure that all dependencies for the selected module are also imported for successful job execution. - PowerShell 7.2 module management is not supported through `Get-AzAutomationModule` cmdlets.
The following are the current limitations and known issues with PowerShell runbo
- Executing child scripts using `.\child-runbook.ps1` is not supported in this preview. **Workaround**: Use `Start-AutomationRunbook` (internal cmdlet) or `Start-AzAutomationRunbook` (from *Az.Automation* module) to start another runbook from parent runbook.-- Runbook properties defining logging preference is not supported in PowerShell 7 runtime.
- **Workaround**: Explicitly set the preference at the start of the runbook as below -
+- Runbook properties defining logging preference aren't supported in the PowerShell 7 runtime.
+ **Workaround**: Explicitly set the preference at the start of the runbook as shown below:
``` $VerbosePreference = "Continue"
PowerShell Workflow runbooks are text runbooks based on [Windows PowerShell Work
## Python runbooks
-Python runbooks compile under Python 2, Python 3.8 (preview) and Python 3.10 (preview). You can directly edit the code of the runbook using the text editor in the Azure portal. You can also use an offline text editor and [import the runbook](manage-runbooks.md) into Azure Automation.
+Python runbooks compile under Python 2.7 (GA), Python 3.8 (GA), and Python 3.10 (preview). You can directly edit the code of the runbook using the text editor in the Azure portal. You can also use an offline text editor and [import the runbook](manage-runbooks.md) into Azure Automation.
Currently, Python 3.10 (preview) runtime version is supported for both Cloud and Hybrid jobs in all Public regions except Australia Central2, Korea South, Sweden South, Jio India Central, Brazil SouthEast, Central India, West India, UAE Central, and Gov clouds.
Currently, Python 3.10 (preview) runtime version is supported for both Cloud and
- Uses the robust Python libraries. - Can run in Azure or on Hybrid Runbook Workers.-- For Python 2, Windows Hybrid Runbook Workers are supported with [python 2.7](https://www.python.org/downloads/release/latest/python2) installed.-- For Python 3.8 (preview) Cloud Jobs, Python 3.8 (preview) version is supported. Scripts and packages from any 3.x version might work if the code is compatible across different versions.-- For Python 3.8 (preview) Hybrid jobs on Windows machines, you can choose to install any 3.x version you may want to use.-- For Python 3.8 (preview) Hybrid jobs on Linux machines, we depend on the Python 3 version installed on the machine to run DSC OMSConfig and the Linux Hybrid Worker. Different versions should work if there are no breaking changes in method signatures or contracts between versions of Python 3.
+- For Python 2.7, Windows Hybrid Runbook Workers are supported with [python 2.7](https://www.python.org/downloads/release/latest/python2) installed.
+- For Python 3.8 Cloud Jobs, Python 3.8 version is supported. Scripts and packages from any 3.x version might work if the code is compatible across different versions.
+- For Python 3.8 Hybrid jobs on Windows machines, you can choose to install any 3.x version you may want to use.
+- For Python 3.8 Hybrid jobs on Linux machines, we depend on the Python 3 version installed on the machine to run DSC OMSConfig and the Linux Hybrid Worker. Different versions should work if there are no breaking changes in method signatures or contracts between versions of Python 3.
### Limitations
Following are the limitations of Python runbooks
- For Python 2.7.12 modules, use wheel files targeting cp27-amd64. - To use third-party libraries, you must [import the packages](python-packages.md) into the Automation account. - Azure Automation doesn't support **sys.stderr**.-- The Python **automationassets** package is not available on pypi.org, so it's not available for import onto a Windows machine.
+- The Python **automationassets** package isn't available on pypi.org, so it's not available for import onto a Windows machine.
-# [Python 3.8 (preview)](#tab/py38)
+# [Python 3.8 (GA)](#tab/py38)
- You must be familiar with Python scripting.-- For Python 3.8 (preview) modules, use wheel files targeting cp38-amd64.
+- For Python 3.8 modules, use wheel files targeting cp38-amd64.
- To use third-party libraries, you must [import the packages](python-packages.md) into the Automation account.-- Using **Start-AutomationRunbook** cmdlet in PowerShell/PowerShell Workflow to start a Python 3.8 (preview) runbook (preview) doesn't work. You can use **Start-AzAutomationRunbook** cmdlet from Az.Automation module or **Start-AzureRmAutomationRunbook** cmdlet from AzureRm.Automation module to work around this limitation. 
+- Using the **Start-AutomationRunbook** cmdlet in PowerShell/PowerShell Workflow to start a Python 3.8 runbook doesn't work. You can use the **Start-AzAutomationRunbook** cmdlet from the Az.Automation module or the **Start-AzureRmAutomationRunbook** cmdlet from the AzureRm.Automation module to work around this limitation (see the sketch after this list).
- Azure Automation doesn't support **sys.stderr**. - The Python **automationassets** package isn't available on pypi.org, so it's not available for import onto a Windows machine.
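As a minimal sketch of that workaround (assuming the Az.Automation module is installed and you're signed in with `Connect-AzAccount`), the following call starts a Python 3.8 runbook from PowerShell; the account, resource group, and runbook names are placeholders, not values from this article.

```powershell
# Placeholder names - replace with your own Automation account, resource group, and Python 3.8 runbook.
Start-AzAutomationRunbook `
    -AutomationAccountName "contoso-automation" `
    -ResourceGroupName "contoso-rg" `
    -Name "MyPython38Runbook"
```

The cmdlet returns a job object, so the same pattern also works when a parent runbook needs to trigger another runbook.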
Following are the limitations of Python runbooks
**Limitations** -- For Python 3.10 (preview) modules, currently, only the wheel files targeting cp310 Linux OS are supported. [Learn more](./python-3-packages.md).-- Custom packages for Python 3.10 (preview) are only validated during job runtime. Job is expected to fail if the package is not compatible in the runtime or if necessary dependencies of packages are not imported into automation account.-- Currently, Python 3.10 (preview) runbooks are only supported from Azure portal. Rest API and PowerShell isn't supported.
+- For Python 3.10 (preview) modules, currently only wheel files targeting cp310 on Linux are supported. [Learn more](./python-3-packages.md).
+- Currently, only cloud jobs are supported for Python 3.10 (preview) runtime versions.
+- Custom packages for Python 3.10 (preview) are only validated during job runtime. The job is expected to fail if the package isn't compatible with the runtime or if required dependencies of the package aren't imported into the Automation account.
+- Currently, Python 3.10 (preview) runbooks are only supported from the Azure portal. REST API and PowerShell aren't supported.
automation Create Run As Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/create-run-as-account.md
- Title: Create an Azure Automation Run As account
-description: This article tells how to create an Azure Automation Run As account with PowerShell or from the Azure portal.
-- Previously updated : 05/17/2021---
-# How to create an Azure Automation Run As account
-
-> [!IMPORTANT]
-> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from an existing Run As accounts to managed identity](https://learn.microsoft.com/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts) to start migrating the runbooks from Run As account to managed identities before 30 September 2023.
--
-Run As accounts in Azure Automation provide authentication for managing resources on the Azure Resource Manager or Azure Classic deployment model using Automation runbooks and other Automation features. This article describes how to create a Run As or Classic Run As account from the Azure portal or Azure PowerShell.
-
-When you create the Run As or Classic Run As account in the Azure portal, by default it uses a self-signed certificate. If you want to use a certificate issued by your enterprise or third-party certification authority (CA), can use the [PowerShell script to create a Run As account](#powershell-script-to-create-a-run-as-account).
-
-## Create account in Azure portal
-
-Perform the following steps to update your Azure Automation account in the Azure portal. The Run As and Classic Run As accounts are created separately. If you don't need to manage classic resources, you can just create the Azure Run As account.
-
-1. Sign in to the Azure portal with an account that is a member of the Subscription Admins role and co-administrator of the subscription.
-
-2. Search for and select **Automation Accounts**.
-
-3. On the **Automation Accounts** page, select your Automation account from the list.
-
-4. In the left pane, select **Run As Accounts** in the **Account Settings** section.
-
- :::image type="content" source="media/create-run-as-account/automation-account-properties-pane.png" alt-text="Select the Run As Account option.":::
-
-5. Depending on the account you require, use the **+ Azure Run As Account** or **+ Azure Classic Run As Account** pane. After reviewing the overview information, click **Create**.
-
- :::image type="content" source="media/create-run-as-account/automation-account-create-run-as.png" alt-text="Select the option to create a Run As Account":::
-
-6. While Azure creates the Run As account, you can track the progress under **Notifications** from the menu. A banner is also displayed stating that the account is being created. The process can take a few minutes to complete.
-
-## Create account using PowerShell
-
-The following list provides the requirements to create a Run As account in PowerShell using a provided script. These requirements apply to both types of Run As accounts.
-
-* Windows 10 or Windows Server 2016 with Azure Resource Manager modules 3.4.1 and later. The PowerShell script doesn't support earlier versions of Windows.
-* Azure PowerShell PowerShell 6.2.4 or later. For information, see [How to install and configure Azure PowerShell](/powershell/azure/install-az-ps).
-* An Automation account, which is referenced as the value for the `AutomationAccountName` and `ApplicationDisplayName` parameters.
-* Permissions equivalent to the ones listed in [Required permissions to configure Run As accounts](automation-security-overview.md#permissions).
-
-If you are planning to use a certificate from your enterprise or third-party certificate authority (CA), Automation requires the certificate to have the following configuration:
-
- * Specify the provider **Microsoft Enhanced RSA and AES Cryptographic Provider**
- * Marked as exportable
- * Configured to use the SHA256 algorithm
- * Saved in the `*.pfx` or `*.cer` format.
-
-To get the values for `AutomationAccountName`, `SubscriptionId`, and `ResourceGroupName`, which are required parameters for the PowerShell script, complete the following steps.
-
-1. Sign in to the Azure portal.
-
-1. Search for and select **Automation Accounts**.
-
-1. On the Automation Accounts page, select your Automation account from the list.
-
-1. In the left pane, select **Properties**.
-
-1. Note the values for **Name**, **Subscription ID**, and **Resource Group** on the **Properties** page.
-
- ![Automation account properties page](media/create-run-as-account/automation-account-properties.png)
-
-### PowerShell script to create a Run As account
-
-The PowerShell script includes support for several configurations.
-
-* Create a Run As account and/or a Classic Run As account by using a self-signed certificate.
-* Create a Run As account and/or a Classic Run As account by using a certificate issued by your enterprise or third-party certification authority (CA).
-* Create a Run As account and/or a Classic Run As account by using a self-signed certificate in the Azure Government cloud.
-
-1. Download and save the script to a local folder using the following command.
-
- ```powershell
- wget https://raw.githubusercontent.com/azureautomation/runbooks/master/Utility/AzRunAs/Create-RunAsAccount.ps1 -outfile Create-RunAsAccount.ps1
- ```
-
-2. Start PowerShell with elevated user rights and navigate to the folder that contains the script.
-
-3. Run one of the following commands to create a Run As and/or Classic Run As account based on your requirements.
-
- * Create a Run As account using a self-signed certificate.
-
- ```powershell
- .\Create-RunAsAccount.ps1 -ResourceGroup <ResourceGroupName> -AutomationAccountName <NameofAutomationAccount> -SubscriptionId <SubscriptionId> -ApplicationDisplayName <DisplayNameofAADApplication> -SelfSignedCertPlainPassword <StrongPassword> -CreateClassicRunAsAccount $false
- ```
-
- * Create a Run As account and a Classic Run As account by using a self-signed certificate.
-
- ```powershell
- .\Create-RunAsAccount.ps1 -ResourceGroup <ResourceGroupName> -AutomationAccountName <NameofAutomationAccount> -SubscriptionId <SubscriptionId> -ApplicationDisplayName <DisplayNameofAADApplication> -SelfSignedCertPlainPassword <StrongPassword> -CreateClassicRunAsAccount $true
- ```
-
- * Create a Run As account and a Classic Run As account by using an enterprise certificate.
-
- ```powershell
- .\Create-RunAsAccount.ps1 -ResourceGroup <ResourceGroupName> -AutomationAccountName <NameofAutomationAccount> -SubscriptionId <SubscriptionId> -ApplicationDisplayName <DisplayNameofAADApplication> -SelfSignedCertPlainPassword <StrongPassword> -CreateClassicRunAsAccount $true -EnterpriseCertPathForRunAsAccount <EnterpriseCertPfxPathForRunAsAccount> -EnterpriseCertPlainPasswordForRunAsAccount <StrongPassword> -EnterpriseCertPathForClassicRunAsAccount <EnterpriseCertPfxPathForClassicRunAsAccount> -EnterpriseCertPlainPasswordForClassicRunAsAccount <StrongPassword>
- ```
-
- If you've created a Classic Run As account with an enterprise public certificate (.cer file), use this certificate. The script creates and saves it to the temporary files folder on your computer, under the user profile `%USERPROFILE%\AppData\Local\Temp` you used to execute the PowerShell session. See [Uploading a management API certificate to the Azure portal](../cloud-services/cloud-services-configure-ssl-certificate-portal.md).
-
- * Create a Run As account and a Classic Run As account by using a self-signed certificate in the Azure Government cloud
-
- ```powershell
- .\Create-RunAsAccount.ps1 -ResourceGroup <ResourceGroupName> -AutomationAccountName <NameofAutomationAccount> -SubscriptionId <SubscriptionId> -ApplicationDisplayName <DisplayNameofAADApplication> -SelfSignedCertPlainPassword <StrongPassword> -CreateClassicRunAsAccount $true -EnvironmentName AzureUSGovernment
- ```
-
-4. After the script has executed, you're prompted to authenticate with Azure. Sign in with an account that's a member of the subscription administrators role. If you are creating a Classic Run As account, your account must be a co-administrator of the subscription.
-
-## Next steps
-
-* To get started with PowerShell runbooks, see [Tutorial: Create a PowerShell runbook](./learn/powershell-runbook-managed-identity.md).
-
-* To get started with a Python 3 runbook, see [Tutorial: Create a Python 3 runbook](learn/automation-tutorial-runbook-textual-python-3.md).
automation Default Python Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/default-python-packages.md
+
+ Title: Default Python packages in Azure Automation
+description: List of default Python packages in Automation service.
Last updated : 03/15/2023+++
+# Default Python packages
+
+The following is the list of default Python packages:
+
+| **Package Name** | **Version** |
+| | |
+|adal | 1.2.2 |
+|azure_applicationinsights | 0.1.0 |
+|azure_batch | 4.1.3 |
+|azure_common | 1.1.23 |
+|azure_cosmosdb_nspkg | 2.0.2 |
+|azure_cosmosdb_table | 1.0.6 |
+|azure_datalake_store | 0.0.48 |
+|azure_eventgrid | 1.3.0 |
+|azure_graphrbac | 0.40.0 |
+|azure_keyvault | 1.1.0 |
+|azure_loganalytics | 0.1.0 |
+|azure_mgmt_advisor | 1.0.1 |
+|azure_mgmt_applicationinsights | 0.1.1 |
+|azure_mgmt_authorization | 0.50.0 |
+|azure_mgmt_batch | 5.0.1 |
+|azure_mgmt_batchai | 2.0.0 |
+|azure_mgmt_billing | 0.2.0 |
+|azure_mgmt_cdn | 3.1.0 |
+|azure_mgmt_cognitiveservices | 3.0.0 |
+|azure_mgmt_commerce | 1.0.1 |
+|azure_mgmt_compute | 4.6.2 |
+|azure_mgmt_consumption | 2.0.0 |
+|azure_mgmt_containerinstance | 1.5.0 |
+|azure_mgmt_containerregistry | 2.8.0 |
+|azure_mgmt_containerservice | 4.4.0 |
+|azure_mgmt_cosmosdb | 0.4.1 |
+|azure_mgmt_datafactory | 0.6.0 |
+|azure_mgmt_datalake_analytics | 0.6.0 |
+|azure_mgmt_datalake_nspkg | 3.0.1 |
+|azure_mgmt_datalake_store | 0.5.0 |
+|azure_mgmt_datamigration | 1.0.0 |
+|azure_mgmt_devspaces | 0.1.0 |
+|azure_mgmt_devtestlabs | 2.2.0 |
+|azure_mgmt_dns | 2.1.0 |
+|azure_mgmt_eventgrid | 1.0.0 |
+|azure_mgmt_eventhub | 2.6.0 |
+|azure_mgmt_hanaonazure | 0.1.1 |
+|azure_mgmt_iotcentral | 0.1.0 |
+|azure_mgmt_iothub | 0.5.0 |
+|azure_mgmt_iothubprovisioningservices | 0.2.0 |
+|azure_mgmt_keyvault | 1.1.0 |
+|azure_mgmt_loganalytics | 0.2.0 |
+|azure_mgmt_logic | 3.0.0 |
+|azure_mgmt_machinelearningcompute | 0.4.1 |
+|azure_mgmt_managementgroups | 0.1.0 |
+|azure_mgmt_managementpartner | 0.1.1 |
+|azure_mgmt_maps | 0.1.0 |
+|azure_mgmt_marketplaceordering | 0.1.0 |
+|azure_mgmt_media | 1.0.0 |
+|azure_mgmt_monitor | 0.5.2 |
+|azure_mgmt_msi | 0.2.0 |
+|azure_mgmt_network | 2.7.0 |
+|azure_mgmt_notificationhubs | 2.1.0 |
+|azure_mgmt_nspkg | 3.0.2 |
+|azure_mgmt_policyinsights | 0.1.0 |
+|azure_mgmt_powerbiembedded | 2.0.0 |
+|azure_mgmt_rdbms | 1.9.0 |
+|azure_mgmt_recoveryservices | 0.3.0 |
+|azure_mgmt_recoveryservicesbackup | 0.3.0 |
+|azure_mgmt_redis | 5.0.0 |
+|azure_mgmt_relay | 0.1.0 |
+|azure_mgmt_reservations | 0.2.1 |
+|azure_mgmt_resource | 2.2.0 |
+|azure_mgmt_scheduler | 2.0.0 |
+|azure_mgmt_search | 2.1.0 |
+|azure_mgmt_servicebus | 0.5.3 |
+|azure_mgmt_servicefabric | 0.2.0 |
+|azure_mgmt_signalr | 0.1.1 |
+|azure_mgmt_sql | 0.9.1 |
+|azure_mgmt_storage | 2.0.0 |
+|azure_mgmt_subscription | 0.2.0 |
+|azure_mgmt_trafficmanager | 0.50.0 |
+|azure_mgmt_web | 0.35.0 |
+|azure_mgmt | 4.0.0 |
+|azure_nspkg | 3.0.2 |
+|azure_servicebus | 0.21.1 |
+|azure_servicefabric | 6.3.0.0 |
+|azure_servicemanagement_legacy | 0.20.6 |
+|azure_storage_blob | 1.5.0 |
+|azure_storage_common | 1.4.2 |
+|azure_storage_file | 1.4.0 |
+|azure_storage_queue | 1.4.0 |
+|azure | 4.0.0 |
+|certifi | 2019.11.28 |
+|cffi | 1.13.2 |
+|chardet | 3.0.4 |
+|cryptography | 2.8 |
+|idna | 2.8 |
+|isodate | 0.6.0 |
+|msrest | 0.6.10 |
+|msrestazure | 0.6.2 |
+|oauthlib | 3.1.0 |
+|pip | 20.1.1 |
+|pycryptodome | 3.9.7 |
+|PyJWT | 1.7.1 |
+|pyOpenSSL | 19.1.0 |
+|python_dateutil | 2.8.1 |
+|requests_oauthlib | 1.3.0 |
+|requests | 2.22.0 |
+|setuptools | 41.4.0 |
+|six | 1.13.0 |
+|sqlite_bro | 0.9.1 |
+|urllib3 | 1.25.7 |
+|wheel | 0.34.2 |
+
automation Python 3 Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/python-3-packages.md
Title: Manage Python 3 packages in Azure Automation
-description: This article tells how to manage Python 3 packages (preview) in Azure Automation.
+description: This article tells how to manage Python 3 packages in Azure Automation.
Previously updated : 03/29/2023 Last updated : 05/08/2023
-# Manage Python 3 packages (preview) in Azure Automation
+# Manage Python 3 packages in Azure Automation
-This article describes how to import, manage, and use Python 3 (preview) packages in Azure Automation running on the Azure sandbox environment and Hybrid Runbook Workers. Python packages should be downloaded on Hybrid Runbook workers for successful job execution. To help simplify runbooks, you can use Python packages to import the modules you need.
+This article describes how to import, manage, and use Python 3 packages in Azure Automation running on the Azure sandbox environment and Hybrid Runbook Workers. Python packages should be downloaded on Hybrid Runbook workers for successful job execution. To help simplify runbooks, you can use Python packages to import the modules you need.
For information on managing Python 2 packages, see [Manage Python 2 packages](./python-packages.md). ## Default Python packages
-To support Python 3.8 (preview) runbooks in the Automation service, Azure package 4.0.0 is installed by default in the Automation account. The default version can be overridden by importing Python packages into your Automation account.
+To support Python 3.8 runbooks in the Automation service, some Python packages are installed by default; see the [list of default Python packages](default-python-packages.md). You can override the default version by importing Python packages into your Automation account.
Preference is given to the imported version in your Automation account. To import a single package, see [Import a package](#import-a-package). To import a package along with its dependencies, see [Import a package with dependencies](#import-a-package-with-dependencies).
-There are no default packages installed for Python 3.10 (preview).
+> [!NOTE]
+> There are no default packages installed for Python 3.10 (preview).
## Packages as source files
The [Python Package Index](https://pypi.org/) (PyPI) is a repository of software
Select a Python version:
-#### [Python 3.8 (preview)](#tab/py3)
+#### [Python 3.8 (GA)](#tab/py3)
| Filename part | Description | |||
-|cp38|Automation supports **Python 3.8 (preview)** for Cloud jobs.|
+|cp38|Automation supports **Python 3.8** for Cloud jobs.|
|amd64|Azure sandbox processes are **Windows 64-bit** architecture.| For example:
Perform the following steps using a 64-bit Windows machine with Python 3.8.x and
|cp310|Automation supports **Python 3.10 (preview)** for Cloud jobs.| |manylinux_x86_64|Azure sandbox processes are Linux-based 64-bit architecture for Python 3.10 (preview) runbooks. - For example: - To import pandas - select a wheel file with a name similar to `pandas-1.5.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl` - Some Python packages available on PyPI don't provide a wheel file. In this case, download the source (.zip or .tar.gz file) and generate the wheel file using pip. Perform the following steps using a 64-bit Linux machine with Python 3.10.x and wheel package installed:
Perform the following steps using a 64-bit Linux machine with Python 3.10.x and
- ## Import a package 1. In your Automation account, select **Python packages** under **Shared Resources**. Then select **+ Add a Python package**. :::image type="content" source="media/python-3-packages/add-python-3-package.png" alt-text="Screenshot of the Python packages page shows Python packages in the left menu and Add a Python package highlighted.":::
-1. On the **Add Python Package** page, select a local package to upload. The package can be a **.whl** or **.tar.gz** file for Python 3.8 (preview) and **.whl** file for Python 3.10 (preview).
-1. Enter a name and select the **Runtime Version** as Python 3.8 (preview) or Python 3.10 (preview).
+1. On the **Add Python Package** page, select a local package to upload. The package can be a **.whl** or **.tar.gz** file for Python 3.8 and a **.whl** file for Python 3.10 (preview).
+1. Enter a name and select the **Runtime Version** as Python 3.8 or Python 3.10 (preview).
> [!NOTE] > Currently, Python 3.10 (preview) runtime version is supported for both Cloud and Hybrid jobs in all Public regions except Australia Central2, Korea South, Sweden South, Jio India Central, Brazil Southeast, Central India, West India, UAE Central, and Gov clouds. 1. Select **Import**.
- :::image type="content" source="media/python-3-packages/upload-package.png" alt-text="Screenshot shows the Add Python 3.8 (preview) Package page with an uploaded tar.gz file selected.":::
+ :::image type="content" source="media/python-3-packages/upload-package.png" alt-text="Screenshot shows the Add Python 3.8 Package page with an uploaded tar.gz file selected.":::
-After a package has been imported, it's listed on the Python packages page in your Automation account. To remove a package, select the package and click **Delete**.
+After a package has been imported, it's listed on the Python packages page in your Automation account. To remove a package, select the package and select **Delete**.
### Import a package with dependencies
-You can import a Python 3.8 (preview) package and its dependencies by importing the following Python script into a Python 3 runbook, and then running it.
+
+You can import a Python 3.8 package and its dependencies by importing the following Python script into a Python 3.8 runbook, and then running it.
+ ```cmd https://github.com/azureautomation/runbooks/blob/master/Utility/Python/import_py3package_from_pypi.py
For information on importing the runbook, see [Import a runbook from the Azure p
> Currently, importing a runbook from Azure Portal isn't supported for Python 3.10 (preview).
-The **Import a runbook** page defaults the runbook name to match the name of the script. If you have access to the field, you can change the name. **Runbook type** may default to **Python 2**. If it does, make sure to change it to **Python 3**.
+The **Import a runbook** page defaults the runbook name to match the name of the script. If you have access to the field, you can change the name. **Runbook type** may default to **Python 2.7**. If it does, make sure to change it to **Python 3.8**.
:::image type="content" source="media/python-3-packages/import-python-3-package.png" alt-text="Screenshot shows the Python 3 runbook import page.":::
The script (`import_py3package_from_pypi.py`) requires the following parameters.
| Parameter | Description | ||--|
-|subscription_id | Subscription ID of the Automation account |
+| subscription_id | Subscription ID of the Automation account |
| resource_group | Name of the resource group that the Automation account is defined in | | automation_account | Automation account name | | module_name | Name of the module to import from `pypi.org` |
+| module_version | Version of the module |
+
+Provide the parameter values as a single string in the following format:
+
+-s <subscription_id> -g <resource_group> -a <automation_account> -m <module_name> -v <module_version>
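For example, a filled-in parameter string might look like the following; the GUID, resource group, account name, module, and version are placeholders for illustration only.

```
-s aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb -g contoso-automation-rg -a contoso-automation -m pandas -v 1.5.0
```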
For more information on using parameters with runbooks, see [Work with runbook parameters](start-runbooks.md#work-with-runbook-parameters).
for package in installed_packages_list:
print(package) ```
-### Python 3.8 (preview) PowerShell cmdlets
+### Python 3.8 PowerShell cmdlets
-#### Add new Python 3.8 (preview) package
+#### Add new Python 3.8 package
```python New-AzAutomationPython3Package -AutomationAccountName tarademo -ResourceGroupName mahja -Name requires.io -ContentLinkUri https://files.pythonhosted.org/packages/7f/e2/85dfb9f7364cbd7a9213caea0e91fc948da3c912a2b222a3e43bc9cc6432/requires.io-0.2.6-py2.py3-none-any.whl
LastModifiedTime : 9/26/2022 1:37:13 PM +05:30
ProvisioningState : Creating ```
-#### List all Python 3.8 (preview) packages
+#### List all Python 3.8 packages
```python Get-AzAutomationPython3Package -AutomationAccountName tarademo -ResourceGroupName mahja
LastModifiedTime : 9/22/2022 5:03:42 PM +05:30
ProvisioningState : Succeeded ```
-#### Remove Python 3.8 (preview) package
+#### Remove Python 3.8 package
```python Remove-AzAutomationPython3Package -AutomationAccountName tarademo -ResourceGroupName mahja -Name sockets ```
-#### Update Python 3.8 (preview) package
+#### Update Python 3.8 package
```python Set-AzAutomationPython3Package -AutomationAccountName tarademo -ResourceGroupName mahja -Name requires.io -ContentLinkUri https://files.pythonhosted.org/packages/7f/e2/85dfb9f7364cbd7a9213caea0e91fc948da3c912a2b222a3e43bc9cc6432/requires.io-0.2.6-py2.py3-none-any.whl
azure-app-configuration Howto Integrate Azure Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md
spring.cloud.azure.appconfiguration.stores[0].endpoint=<service_endpoint>
:::zone target="docs" pivot="framework-dotnet"
-You must deploy your app to an Azure service when you use managed identities. Managed identities can't be used for authentication of locally running apps. To deploy the .NET Core app that you created in the [Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md) quickstart and modified to use managed identities, follow the guidance in [Publish your web app](../app-service/quickstart-dotnetcore.md?pivots=development-environment-vs&tabs=netcore31#publish-your-web-app).
+You must deploy your app to an Azure service when you use managed identities. Managed identities can't be used for authentication of locally running apps. To deploy the .NET Core app that you created in the [Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md) quickstart and modified to use managed identities, follow the guidance in [Publish your web app](../app-service/quickstart-dotnetcore.md?pivots=development-environment-vs&tabs=netcore31#2-publish-your-web-app).
:::zone-end
azure-arc Conceptual Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-configurations.md
Title: "GitOps Flux v1 configurations with Azure Arc-enabled Kubernetes" Previously updated : 03/21/2023 Last updated : 05/08/2023 description: "This article provides a conceptual overview of GitOps and configurations capability of Azure Arc-enabled Kubernetes."
description: "This article provides a conceptual overview of GitOps and configur
> [!IMPORTANT] > The documents in this section are for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [learn about GitOps with Flux v2](./conceptual-gitops-flux2.md). We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible. >
-> Support for Flux v1-based cluster configuration resources created prior to May 1, 2023 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on May 1, 2023, you won't be able to create new Flux v1-based cluster configuration resources.
+> Support for Flux v1-based cluster configuration resources created prior to January 1, 2024 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on January 1, 2024, you won't be able to create new Flux v1-based cluster configuration resources.
In relation to Kubernetes, GitOps is the practice of declaring the desired state of Kubernetes cluster configurations (deployments, namespaces, etc.) in a Git repository. This declaration is followed by a polling and pull-based deployment of these cluster configurations using an operator. The Git repository can contain:
azure-arc Conceptual Gitops Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-ci-cd.md
Title: "CI/CD Workflow using GitOps - Azure Arc-enabled Kubernetes" Previously updated : 03/21/2023 Last updated : 05/08/2023 description: "This article provides a conceptual overview of a CI/CD workflow using GitOps with Flux"
description: "This article provides a conceptual overview of a CI/CD workflow us
> [!IMPORTANT] > The workflow described in this document uses GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [learn about CI/CD workflow using GitOps with Flux v2](./conceptual-gitops-flux2-ci-cd.md). We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible. >
-> Support for Flux v1-based cluster configuration resources created prior to May 1, 2023 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on May 1, 2023, you won't be able to create new Flux v1-based cluster configuration resources.
+> Support for Flux v1-based cluster configuration resources created prior to January 1, 2024 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on January 1, 2024, you won't be able to create new Flux v1-based cluster configuration resources.
Modern Kubernetes deployments house multiple applications, clusters, and environments. With GitOps, you can manage these complex setups more easily, tracking the desired state of the Kubernetes environments declaratively with Git. Using common Git tooling to track cluster state, you can increase accountability, facilitate fault investigation, and enable automation to manage environments.
azure-arc Conceptual Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md
Title: "GitOps Flux v2 configurations with AKS and Azure Arc-enabled Kubernetes" description: "This article provides a conceptual overview of GitOps in Azure for use in Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters." Previously updated : 04/27/2023 Last updated : 05/08/2023
Starting with [`microsoft.flux` version 1.7.0](extensions-release.md#170-march-2
> [!NOTE] > If you have been using Flux v1, we recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible. >
-> Support for Flux v1-based cluster configuration resources created prior to May 1, 2023 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on May 1, 2023, you won't be able to create new Flux v1-based cluster configuration resources.
+> Support for Flux v1-based cluster configuration resources created prior to January 1, 2024 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on January 1, 2024, you won't be able to create new Flux v1-based cluster configuration resources.
### Controllers
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
Title: "Troubleshoot common Azure Arc-enabled Kubernetes issues" Previously updated : 04/18/2023 Last updated : 05/08/2023 description: "Learn how to resolve common issues with Azure Arc-enabled Kubernetes clusters and GitOps."
If you've enabled a custom or built-in Azure Gatekeeper Policy that limits the r
### Flux v1 > [!NOTE]
-> We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible. Support for Flux v1-based cluster configuration resources created prior to May 1, 2023 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on May 1, 2023, you won't be able to create new Flux v1-based cluster configuration resources.
+> We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible. Support for Flux v1-based cluster configuration resources created prior to January 1, 2024 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on January 1, 2024, you won't be able to create new Flux v1-based cluster configuration resources.
To help troubleshoot issues with `sourceControlConfigurations` resource (Flux v1), run these Azure CLI commands with `--debug` parameter specified:
azure-arc Tutorial Gitops Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-ci-cd.md
Title: 'Tutorial: Implement CI/CD with GitOps using Azure Arc-enabled Kubernetes clusters' description: This tutorial walks through setting up a CI/CD solution using GitOps with Azure Arc-enabled Kubernetes clusters. Previously updated : 03/21/2023 Last updated : 05/08/2023 # Tutorial: Implement CI/CD with GitOps using Azure Arc-enabled Kubernetes clusters
> [!IMPORTANT] > This tutorial uses GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial that uses GitOps with Flux v2](./tutorial-gitops-flux2-ci-cd.md). We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible. >
-> Support for Flux v1-based cluster configuration resources created prior to May 1, 2023 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on May 1, 2023, you won't be able to create new Flux v1-based cluster configuration resources.
+> Support for Flux v1-based cluster configuration resources created prior to January 1, 2024 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on January 1, 2024, you won't be able to create new Flux v1-based cluster configuration resources.
In this tutorial, you'll set up a CI/CD solution using GitOps with Azure Arc-enabled Kubernetes clusters. Using the sample Azure Vote app, you'll:
azure-arc Tutorial Use Gitops Connected Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md
Title: 'Tutorial: Deploy configurations using GitOps on an Azure Arc-enabled Kubernetes cluster' description: This tutorial demonstrates applying configurations on an Azure Arc-enabled Kubernetes cluster. Previously updated : 03/21/2023 Last updated : 05/08/2023
> [!IMPORTANT] > This tutorial is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial for GitOps with Flux v2](./tutorial-use-gitops-flux2.md). We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible. >
-> Support for Flux v1-based cluster configuration resources created prior to May 1, 2023 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on May 1, 2023, you won't be able to create new Flux v1-based cluster configuration resources.
+> Support for Flux v1-based cluster configuration resources created prior to January 1, 2024 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on January 1, 2024, you won't be able to create new Flux v1-based cluster configuration resources.
In this tutorial, you will apply configurations using GitOps on an Azure Arc-enabled Kubernetes cluster. You'll learn how to:
azure-arc Use Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-azure-policy.md
Title: "Apply Flux v1 configurations at-scale using Azure Policy" Previously updated : 03/21/2023 Last updated : 05/08/2023 description: "Apply Flux v1 configurations at-scale using Azure Policy"
You can use Azure Policy to apply Flux v1 configurations (`Microsoft.KubernetesC
> [!IMPORTANT] > This article is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; learn about [using Azure Policy with Flux v2](./use-azure-policy-flux-2.md). We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible. >
-> Support for Flux v1-based cluster configuration resources created prior to May 1, 2023 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on May 1, 2023, you won't be able to create new Flux v1-based cluster configuration resources.
+> Support for Flux v1-based cluster configuration resources created prior to January 1, 2024 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on January 1, 2024, you won't be able to create new Flux v1-based cluster configuration resources.
To use Azure Policy, select a built-in GitOps policy definition and create a policy assignment. When creating the policy assignment: 1. Set the scope for the assignment.
azure-arc Use Gitops With Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-gitops-with-helm.md
Title: "Deploy Helm Charts using GitOps on Azure Arc-enabled Kubernetes cluster" Previously updated : 03/21/2023 Last updated : 05/08/2023 description: "Use GitOps with Helm for an Azure Arc-enabled cluster configuration"
description: "Use GitOps with Helm for an Azure Arc-enabled cluster configuratio
> [!IMPORTANT] > This article is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial for GitOps with Flux v2](./tutorial-use-gitops-flux2.md). We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible. >
-> Support for Flux v1-based cluster configuration resources created prior to May 1, 2023 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on May 1, 2023, you won't be able to create new Flux v1-based cluster configuration resources.
+> Support for Flux v1-based cluster configuration resources created prior to January 1, 2024 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on January 1, 2024, you won't be able to create new Flux v1-based cluster configuration resources.
Helm is an open-source packaging tool that helps you install and manage the lifecycle of Kubernetes applications. Similar to Linux package managers like APT and Yum, Helm is used to manage Kubernetes charts, which are packages of pre-configured Kubernetes resources. This article shows you how to configure and use Helm with Azure Arc-enabled Kubernetes.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/overview.md
Title: Azure Arc overview description: Learn about what Azure Arc is and how it helps customers enable management and governance of their hybrid resources with other Azure services and features. Previously updated : 03/01/2022 Last updated : 05/04/2023
Azure Arc provides a centralized, unified way to:
* Manage your entire environment together by projecting your existing non-Azure and/or on-premises resources into Azure Resource Manager. * Manage virtual machines, Kubernetes clusters, and databases as if they are running in Azure.
-* Use familiar Azure services and management capabilities, regardless of where they live.
+* Use familiar Azure services and management capabilities, regardless of where your resources live.
* Continue using traditional ITOps while introducing DevOps practices to support new cloud native patterns in your environment. * Configure custom locations as an abstraction layer on top of Azure Arc-enabled Kubernetes clusters and cluster extensions. Currently, Azure Arc allows you to manage the following resource types hosted outside of Azure:
Some of the key scenarios that Azure Arc supports are:
* Manage and govern Kubernetes clusters at scale.
-* Use GitOps to deploy configuration across one or more clusters from Git repositories.
+* [Use GitOps to deploy configurations](kubernetes/conceptual-gitops-flux2.md) across one or more clusters from Git repositories.
* Zero-touch compliance and configuration for Kubernetes clusters using Azure Policy.
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
Title: Managing the Azure Arc-enabled servers agent description: This article describes the different management tasks that you will typically perform during the lifecycle of the Azure Connected Machine agent. Previously updated : 04/19/2023 Last updated : 05/04/2023
Actions of the [yum](https://access.redhat.com/articles/yum-cheat-sheet) command
Actions of the [zypper](https://en.opensuse.org/Portal:Zypper) command, such as installation and removal of packages, are logged in the `/var/log/zypper.log` log file.
+### Facilitating auto-upgrade of the agent
+
+The Azure Connected Machine agent supports an automatic upgrade feature to reduce the agent management overhead associated with Azure Arc-enabled servers. To facilitate this functionality, a scheduler job is configured on the connected machine: a scheduled task on Windows and a cron job on Linux. The scheduler job appears in Azure Connected Machine agent version 1.30 or higher.
+
+To view these scheduler jobs in Windows through PowerShell:
+
+```powershell
+schtasks /query /TN azcmagent
+```
+To view these scheduler jobs in Windows through Task Scheduler:
++
+To view these scheduler jobs in Linux:
+
+```
+cat /etc/cron.d/azcmagent_autoupgrade
+```
+
+To opt out of any future automatic upgrades or the scheduler jobs, execute the following Azure CLI commands:
+
+For Windows:
+
+```powershell
+az rest --method patch --url https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.HybridCompute/machines/<machineName>?api-version=2022-12-27-preview --resource https://management.azure.com/ --headers Content-Type=application/json --body '{\"properties\": {\"agentUpgrade\": {\"enableAutomaticUpgrade\": false}}}'
+```
+
+For Linux:
+
+```bash
+az rest --method patch --url https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.HybridCompute/machines/<machineName>?api-version=2022-12-27-preview --resource https://management.azure.com/ --headers Content-Type=application/json --body '{"properties": {"agentUpgrade": {"enableAutomaticUpgrade": false}}}'
+```
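To turn automatic upgrade back on later, the same REST call can be issued with the value set to `true`. The following is a sketch that mirrors the opt-out command above and uses the same placeholder subscription, resource group, and machine names.

```powershell
az rest --method patch --url https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.HybridCompute/machines/<machineName>?api-version=2022-12-27-preview --resource https://management.azure.com/ --headers Content-Type=application/json --body '{\"properties\": {\"agentUpgrade\": {\"enableAutomaticUpgrade\": true}}}'
```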
+ ## Renaming an Azure Arc-enabled server resource When you change the name of a Linux or Windows machine connected to Azure Arc-enabled servers, the new name is not recognized automatically because the resource name in Azure is immutable. As with other Azure resources, you must delete the resource and re-create it in order to use the new name.
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/azure-secure-isolation-guidance.md
recommendations: false Previously updated : 05/04/2023 Last updated : 05/06/2023 # Azure guidance for secure isolation
Tenant isolation in Azure AD involves two primary elements:
As shown in Figure 2, access via Azure AD requires user authentication through a Security Token Service (STS). The authorization system uses information on the userΓÇÖs existence and enabled state through the Directory Services API and Azure RBAC to determine whether the requested access to the target Azure AD instance is authorized for the user in the session. Aside from token-based authentication that is tied directly to the user, Azure AD further supports logical isolation in Azure through: - Azure AD instances are discrete containers and there's no relationship between them.-- Azure AD data is stored in partitions and each partition has a pre-determined set of replicas that are considered the preferred primary replicas. Use of replicas provides high availability of Azure AD services to support identity separation and logical isolation.
+- Azure AD data is stored in partitions and each partition has a predetermined set of replicas that are considered the preferred primary replicas. Use of replicas provides high availability of Azure AD services to support identity separation and logical isolation.
- Access isn't permitted across Azure AD instances unless the Azure AD instance administrator grants it through federation or provisioning of user accounts from other Azure AD instances. - Physical access to servers that comprise the Azure AD service and direct access to Azure ADΓÇÖs back-end systems is [restricted to properly authorized Microsoft operational roles](./documentation-government-plan-security.md#restrictions-on-insider-access) using the Just-In-Time (JIT) privileged access management system. - Azure AD users have no access to physical assets or locations, and therefore it isn't possible for them to bypass the logical Azure RBAC policy checks.
Proper protection and management of cryptographic keys is essential for data sec
The Key Vault service provides an abstraction over the underlying HSMs. It provides a REST API to enable service use from cloud applications and authentication through [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) to allow you to centralize and customize authentication, disaster recovery, high availability, and elasticity. Key Vault supports [cryptographic keys](../key-vault/keys/about-keys.md) of various types, sizes, and curves, including RSA and Elliptic Curve keys. With managed HSMs, support is also available for AES symmetric keys.
-With Key Vault, you can import or generate encryption keys in HSMs, ensuring that keys never leave the HSM protection boundary to support *bring your own key (BYOK)* scenarios, as shown in Figure 3. **Keys generated inside the Key Vault HSMs aren't exportable ΓÇô there can be no clear-text version of the key outside the HSMs.** This binding is enforced by the underlying HSM. BYOK functionality is available with both [key vaults](../key-vault/keys/hsm-protected-keys.md) and [managed HSMs](../key-vault/managed-hsm/hsm-protected-keys-byok.md). Methods for transferring HSM-protected keys to Key Vault vary depending on the underlying HSM, as explained in online documentation.
+With Key Vault, you can import or generate encryption keys in HSMs, ensuring that keys never leave the HSM protection boundary to support *bring your own key (BYOK)* scenarios, as shown in Figure 3.
:::image type="content" source="./media/secure-isolation-fig3.png" alt-text="Azure Key Vault support for bring your own key (BYOK)"::: **Figure 3.** Azure Key Vault support for bring your own key (BYOK)
-**Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.** For extra assurances, see [How does Azure Key Vault protect your keys?](../key-vault/managed-hsm/mhsm-control-data.md#how-does-azure-key-vault-managed-hsm-protect-your-keys)
+**Keys generated inside the Key Vault HSMs aren't exportable; there can be no clear-text version of the key outside the HSMs.** This binding is enforced by the underlying HSM. BYOK functionality is available with both [key vaults](../key-vault/keys/hsm-protected-keys.md) and [managed HSMs](../key-vault/managed-hsm/hsm-protected-keys-byok.md). Methods for transferring HSM-protected keys to Key Vault vary depending on the underlying HSM, as explained in online documentation.
+
+> [!NOTE]
+> Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents are precluded from accessing, using or extracting any data stored in the service, including cryptographic keys. For more information, see [How does Azure Key Vault protect your keys?](../key-vault/managed-hsm/mhsm-control-data.md#how-does-azure-key-vault-managed-hsm-protect-your-keys)
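As a hedged illustration of the HSM binding described above (assuming the Az.KeyVault module and an existing vault), the following sketch generates a key directly inside the HSM boundary; the vault and key names are placeholders, not values from this article.

```powershell
# Placeholder vault and key names.
# -Destination HSM asks Key Vault to generate the key inside the HSM,
# so no clear-text copy of the key ever exists outside the HSM boundary.
Add-AzKeyVaultKey `
    -VaultName "contoso-vault" `
    -Name "contoso-byok-key" `
    -Destination "HSM"
```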
Key Vault provides a robust solution for encryption key lifecycle management. Upon creation, every key vault or managed HSM is automatically associated with the Azure AD tenant that owns the subscription. Anyone trying to manage or retrieve content from a key vault or managed HSM must be properly authenticated and authorized:
Pico-processes are grouped into isolation units called *sandboxes*. The sandbox
When the pico-process needs system resources, it must call into the Drawbridge host to request them. The normal path for a virtual user process would be to call the Library OS to request resources and the Library OS would then call into the ABI. Unless the policy for resource allocation is set up in the driver itself, the Security Monitor would handle the ABI request by checking policy to see if the request is allowed and then servicing the request. This mechanism is used for all system primitives therefore ensuring that the code running in the pico-process can't abuse the resources from the Host machine.
-In addition to being isolated inside sandboxes, pico-processes are also substantially isolated from each other. Each pico-process resides in its own virtual memory address space and runs its own copy of the Library OS with its own user-mode kernel. Each time a user process is launched in a Drawbridge sandbox, a fresh Library OS instance is booted. While this task is more time-consuming compared to launching a non-isolated process on Windows, it's substantially faster than booting a VM while accomplishing logical isolation.
+In addition to being isolated inside sandboxes, pico-processes are also substantially isolated from each other. Each pico-process resides in its own virtual memory address space and runs its own copy of the Library OS with its own user-mode kernel. Each time a user process is launched in a Drawbridge sandbox, a fresh Library OS instance is booted. While this task is more time-consuming compared to launching a nonisolated process on Windows, it's substantially faster than booting a VM while accomplishing logical isolation.
A normal Windows process can call more than 1200 functions that result in access to the Windows kernel; however, the entire interface for a pico-process consists of fewer than 50 calls down to the Host. Most application requests for operating system services are handled by the Library OS within the address space of the pico-process. By providing a significantly smaller interface to the kernel, Drawbridge creates a more secure and isolated operating environment in which applications are much less vulnerable to changes in the Host system and incompatibilities introduced by new OS releases. More importantly, a Drawbridge pico-process is a strongly isolated container within which untrusted code from even the most malicious sources can be run without risk of compromising the Host system. The Host assumes that no code running within the pico-process can be trusted. The Host validates all requests from the pico-process with security checks.
In cases where an Azure service is composed of Microsoft-controlled code and cus
### Physical isolation In addition to robust logical compute isolation available by design to all Azure tenants, if you desire physical compute isolation you can use Azure Dedicated Host or Isolated Virtual Machines, which are both dedicated to a single customer.
+> [!NOTE]
+> Physical tenant isolation increases deployment cost and may not be required in most scenarios given the strong logical isolation assurances provided by Azure.
+ #### Azure Dedicated Host [Azure Dedicated Host](../virtual-machines/dedicated-hosts.md) provides physical servers that can host one or more Azure VMs and are dedicated to one Azure subscription. You can provision dedicated hosts within a region, availability zone, and fault domain. You can then place [Windows](../virtual-machines/windows/overview.md), [Linux](../virtual-machines/linux/overview.md), and [SQL Server on Azure](/azure/azure-sql/virtual-machines/) VMs directly into provisioned hosts using whatever configuration best meets your needs. Dedicated Host provides hardware isolation at the physical server level, enabling you to place your Azure VMs on an isolated and dedicated physical server that runs only your organizationΓÇÖs workloads to meet corporate compliance requirements.
In addition to robust logical compute isolation available by design to all Azure
You can deploy both Windows and Linux virtual machines into dedicated hosts by selecting the server and CPU type, number of cores, and extra features. Dedicated Host enables control over platform maintenance events by allowing you to opt in to a maintenance window to reduce potential impact to your provisioned services. Most maintenance events have little to no impact on your VMs; however, if you're in a highly regulated industry or with a sensitive workload, you may want to have control over any potential maintenance impact.
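The following is a minimal PowerShell sketch of that provisioning flow (host group, then dedicated host) under assumed names, region, zone, and SKU; it isn't a prescribed procedure from this article, just an illustration using the Az.Compute cmdlets.

```powershell
# Placeholder names, region, zone, and SKU - adjust to your environment.
$rg = "contoso-dedicated-rg"

# Host group pinned to one availability zone with two fault domains.
New-AzHostGroup -ResourceGroupName $rg -Name "contoso-hostgroup" `
    -Location "eastus" -Zone 1 -PlatformFaultDomain 2

# Dedicated host inside the group; VMs can then target it (for example, New-AzVM -HostId).
New-AzHost -ResourceGroupName $rg -HostGroupName "contoso-hostgroup" `
    -Name "contoso-host-1" -Location "eastus" -Sku "DSv3-Type1"
```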
-> [!NOTE]
-> Microsoft provides detailed customer guidance on **[Windows](../virtual-machines/windows/quick-create-portal.md)** and **[Linux](../virtual-machines/linux/quick-create-portal.md)** Azure Virtual Machine provisioning using the Azure portal, Azure PowerShell, and Azure CLI.
-
-Table 5 summarizes the available security guidance for your virtual machines provisioned in Azure.
+Microsoft provides detailed customer guidance on **[Windows](../virtual-machines/windows/quick-create-portal.md)** and **[Linux](../virtual-machines/linux/quick-create-portal.md)** Azure Virtual Machine provisioning using the Azure portal, Azure PowerShell, and Azure CLI. Table 5 summarizes the available security guidance for your virtual machines provisioned in Azure.
**Table 5.** Security guidance for Azure virtual machines
Azure uses several networking implementations to achieve these goals:
- Load balancing to spread traffic uniformly across network paths. - End-system based address resolution to scale to large server pools, without introducing complexity to the network control plane.
-These implementations give each service the illusion that all the servers assigned to it, and only those servers, are connected by a single non-interfering Ethernet switch ΓÇô a Virtual Layer 2 (VL2) ΓÇô and maintain this illusion even as the size of each service varies from one server to hundreds of thousands. This VL2 implementation achieves traffic performance isolation, ensuring that it isn't possible for the traffic of one service to be affected by the traffic of any other service, as if each service were connected by a separate physical switch.
+These implementations give each service the illusion that all the servers assigned to it, and only those servers, are connected by a single noninterfering Ethernet switch, a Virtual Layer 2 (VL2), and maintain this illusion even as the size of each service varies from one server to hundreds of thousands. This VL2 implementation achieves traffic performance isolation, ensuring that it isn't possible for the traffic of one service to be affected by the traffic of any other service, as if each service were connected by a separate physical switch.
-This section explains how packets flow through the Azure network, and how the topology, routing design, and directory system combine to virtualize the underlying network fabric, creating the illusion that servers are connected to a large, non-interfering datacenter-wide Layer-2 switch.
+This section explains how packets flow through the Azure network, and how the topology, routing design, and directory system combine to virtualize the underlying network fabric, creating the illusion that servers are connected to a large, noninterfering datacenter-wide Layer-2 switch.
The Azure network uses [two different IP-address families](/windows-server/networking/sdn/technologies/hyper-v-network-virtualization/hyperv-network-virtualization-technical-details-windows-server#packet-encapsulation):
Each Azure [subscription](/azure/cloud-adoption-framework/decision-guides/subscr
- **Shared symmetric keys** – Upon storage account creation, Azure generates two 512-bit storage account keys that control access to the storage account. You can rotate and regenerate these keys at any point thereafter without coordination with your applications. - **Azure AD-based authentication** – Access to Azure Storage can be controlled by Azure Active Directory (Azure AD), which enforces tenant isolation and implements robust measures to prevent access by unauthorized parties, including Microsoft insiders. More information about Azure AD tenant isolation is available from a white paper [Azure Active Directory Data Security Considerations](https://aka.ms/AADDataWhitePaper).-- **Shared access signatures (SAS)** – Shared access signatures or "pre-signed URLs" can be created from the shared symmetric keys. These URLs can be significantly limited in scope to reduce the available attack surface, but at the same time allow applications to grant storage access to another user, service, or device.
+- **Shared access signatures (SAS)** – Shared access signatures or "presigned URLs" can be created from the shared symmetric keys. These URLs can be significantly limited in scope to reduce the available attack surface, but at the same time allow applications to grant storage access to another user, service, or device.
- **User delegation SAS** – Delegated authentication is similar to SAS but is [based on Azure AD tokens](/rest/api/storageservices/create-user-delegation-sas) rather than the shared symmetric keys. This approach allows a service that authenticates with Azure AD to create a presigned URL with limited scope and grant temporary access to another user, service, or device (a sketch follows this list). - **Anonymous public read access** – You can allow a small portion of your storage to be publicly accessible without authentication or authorization. This capability can be disabled at the subscription level if you desire more stringent control.
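To make the last two options concrete, here's a minimal sketch using the `azure-storage-blob` and `azure-identity` Python packages: it authenticates with Azure AD, requests a user delegation key, and issues a narrowly scoped, short-lived SAS for a single blob. The account, container, and blob names are placeholders.

```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobSasPermissions, BlobServiceClient, generate_blob_sas

account_url = "https://<storage-account>.blob.core.windows.net"  # placeholder

# Azure AD-based authentication to the storage account (no shared keys involved).
service = BlobServiceClient(account_url, credential=DefaultAzureCredential())

# Request a user delegation key that is itself only valid for one hour.
now = datetime.now(timezone.utc)
delegation_key = service.get_user_delegation_key(now, now + timedelta(hours=1))

# Issue a SAS that is limited to one blob, read-only, and expires in 15 minutes.
sas_token = generate_blob_sas(
    account_name="<storage-account>",
    container_name="reports",
    blob_name="monthly.csv",
    user_delegation_key=delegation_key,
    permission=BlobSasPermissions(read=True),
    expiry=now + timedelta(minutes=15),
)

signed_url = f"{account_url}/reports/monthly.csv?{sas_token}"
# Hand signed_url to the user, service, or device that needs temporary access.
```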
All data blocks stored in stream extent nodes have a 64-bit cyclic redundancy ch
Your data in Azure Storage relies on data encryption at rest to provide cryptographic certainty for logical data isolation. You can choose between Microsoft-managed encryption keys (also known as platform-managed encryption keys) or customer-managed encryption keys (CMK). The handling of data encryption and decryption is transparent to customers, as discussed in the next section. ### Data encryption at rest
-Azure provides extensive options for [data encryption at rest](../security/fundamentals/encryption-atrest.md) to help you safeguard your data and meet your compliance needs using both Microsoft-managed encryption keys and customer-managed encryption keys. For more information, see [data encryption models](../security/fundamentals/encryption-models.md). This process relies on multiple encryption keys and services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management.
+Azure provides extensive options for [data encryption at rest](../security/fundamentals/encryption-atrest.md) to help you safeguard your data and meet your compliance needs when using both Microsoft-managed encryption keys and customer-managed encryption keys. For more information, see [data encryption models](../security/fundamentals/encryption-models.md). This process relies on multiple encryption keys and services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management.
> [!NOTE] > If you require extra security and isolation assurances for your most sensitive data stored in Azure services, you can encrypt it using your own encryption keys you control in Azure Key Vault.
Because data encryption is performed by the Storage service, server-side encrypt
#### Azure Disk encryption Azure Storage service encryption encrypts the page blobs that store Azure Virtual Machine disks. Moreover, you may optionally use [Azure Disk encryption](../virtual-machines/disk-encryption-overview.md) to encrypt Azure [Windows](../virtual-machines/windows/disk-encryption-overview.md) and [Linux](../virtual-machines/linux/disk-encryption-overview.md) IaaS Virtual Machine disks to increase storage isolation and assure cryptographic certainty of your data stored in Azure. This encryption includes [managed disks](../virtual-machines/managed-disks-overview.md), as described later in this section. Azure disk encryption uses the industry standard [BitLocker](/windows/security/information-protection/bitlocker/bitlocker-overview) feature of Windows and the [DM-Crypt](https://en.wikipedia.org/wiki/Dm-crypt) feature of Linux to provide OS-based volume encryption that is integrated with Azure Key Vault.
-Drive encryption through BitLocker and DM-Crypt is a data protection feature that integrates with the operating system and addresses the threats of data theft or exposure from lost, stolen, or inappropriately decommissioned computers. BitLocker and DM-Crypt provide the most protection when used with a Trusted Platform Module (TPM) version 1.2 or higher. The TPM is a microcontroller designed to secure hardware through integrated cryptographic keys – it's commonly pre-installed on newer computers. BitLocker and DM-Crypt can use this technology to protect the keys used to encrypt disk volumes and provide integrity to computer boot process.
+Drive encryption through BitLocker and DM-Crypt is a data protection feature that integrates with the operating system and addresses the threats of data theft or exposure from lost, stolen, or inappropriately decommissioned computers. BitLocker and DM-Crypt provide the most protection when used with a Trusted Platform Module (TPM) version 1.2 or higher. The TPM is a microcontroller designed to secure hardware through integrated cryptographic keys – it's commonly preinstalled on newer computers. BitLocker and DM-Crypt can use this technology to protect the keys used to encrypt disk volumes and provide integrity to the computer boot process.
For managed disks, Azure Disk encryption allows you to encrypt the OS and Data disks used by an IaaS virtual machine; however, Data can't be encrypted without first encrypting the OS volume. The solution relies on Azure Key Vault to help you control and manage the disk encryption keys in key vaults. You can supply your own encryption keys, which are safeguarded in Azure Key Vault to support *bring your own key (BYOK)* scenarios, as described previously in *[Data encryption key management](#data-encryption-key-management)* section.
The sectors on the physical disk associated with the deleted data become immedia
Customers aren't provided with direct access to the underlying physical storage. Since customer software only addresses virtual disks, there's no way for another customer to express a request to read from or write to a physical address that is allocated to you or a physical address that is free.
-Conceptually, this rationale applies regardless of the software that keeps track of reads and writes. For [Azure SQL Database](../security/fundamentals/isolation-choices.md#sql-database-isolation), it's the SQL Database software that does this enforcement. For Azure Storage, it's the Azure Storage software. For non-durable drives of a VM, it's the VHD handling code of the Host OS. The mapping from virtual to physical address takes place outside of the customer VM.
+Conceptually, this rationale applies regardless of the software that keeps track of reads and writes. For [Azure SQL Database](../security/fundamentals/isolation-choices.md#sql-database-isolation), it's the SQL Database software that does this enforcement. For Azure Storage, it's the Azure Storage software. For nondurable drives of a VM, it's the VHD handling code of the Host OS. The mapping from virtual to physical address takes place outside of the customer VM.
Finally, as described in the *[Data encryption at rest](#data-encryption-at-rest)* section and depicted in Figure 16, the encryption key hierarchy relies on the Key Encryption Key (KEK), which can be kept in Azure Key Vault under your control (that is, customer-managed key – CMK) and used to encrypt the Data Encryption Key (DEK), which in turn encrypts data at rest using AES-256 symmetric encryption. Data in Azure Storage is encrypted at rest by default, and you can choose to have the encryption keys under your own control. In this manner, you can also prevent access to your data stored in Azure. Moreover, since the KEK is required to decrypt the DEKs, the KEK is effectively a single point of deletion: removing the KEK renders the DEKs, and therefore the data they protect, unrecoverable.
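The KEK/DEK relationship can be illustrated with a small, self-contained Python sketch using the `cryptography` package. This is purely conceptual: it mimics the wrap/unwrap pattern described above and isn't how Azure Storage or Key Vault are implemented or called.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.keywrap import aes_key_unwrap, aes_key_wrap

# The KEK stands in for the customer-managed key held in Azure Key Vault.
kek = AESGCM.generate_key(bit_length=256)

# The DEK is the key that actually encrypts the data at rest (AES-256).
dek = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(dek).encrypt(nonce, b"customer data at rest", None)

# Only the wrapped (KEK-encrypted) DEK is persisted next to the ciphertext.
wrapped_dek = aes_key_wrap(kek, dek)

# Reading the data requires unwrapping the DEK with the KEK first.
recovered_dek = aes_key_unwrap(kek, wrapped_dek)
plaintext = AESGCM(recovered_dek).decrypt(nonce, ciphertext, None)

# Deleting or revoking the KEK leaves every wrapped DEK, and therefore the
# data it protects, unrecoverable - the single point of deletion noted above.
```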
Compared to traditional on-premises hosted systems, Azure provides a greatly **r
PaaS VMs offer more advanced **protection against persistent malware** infections than traditional physical server solutions, which, if compromised by an attacker, can be difficult to clean, even after the vulnerability is corrected. The attacker may have left behind modifications to the system that allow re-entry, and it's a challenge to find all such changes. In the extreme case, the system must be reimaged from scratch with all software reinstalled, sometimes resulting in the loss of application data. With PaaS VMs, reimaging is a routine part of operations, and it can help clean out intrusions that haven't even been detected. This approach makes it more difficult for a compromise to persist.
+#### Side channel attacks
Microsoft has been at the forefront of mitigating **speculative execution side channel attacks** that exploit hardware vulnerabilities in modern processors that use hyper-threading. In many ways, these issues are similar to the Spectre (variant 2) side channel attack, which was disclosed in 2018. Multiple new speculative execution side channel issues were disclosed by both Intel and AMD in 2022. To address these vulnerabilities, Microsoft has developed and optimized Hyper-V **[HyperClear](/virtualization/community/team-blog/2018/20180814-hyper-v-hyperclear-mitigation-for-l1-terminal-fault)**, a comprehensive and high performing side channel vulnerability mitigation architecture. HyperClear relies on three main components to ensure strong inter-VM isolation: - **Core scheduler** to avoid sharing of a CPU core's private buffers and other resources. - **Virtual-processor address space isolation** to avoid speculative access to another virtual machine's memory or another virtual CPU core's private state. - **Sensitive data scrubbing** to avoid leaving private data anywhere in hypervisor memory other than within a virtual processor's private address space so that this data can't be speculatively accessed in the future.
-These protections have been deployed to Azure and are available in Windows Server 2016 and later supported releases. The Hyper-V HyperClear architecture has proven to be a readily extensible design that helps provide strong isolation boundaries against a variety of speculative execution side channel attacks with negligible impact on performance.
+These protections have been deployed to Azure and are available in Windows Server 2016 and later supported releases.
+
+> [!NOTE]
+> The Hyper-V HyperClear architecture has proven to be a readily extensible design that helps provide strong isolation boundaries against a variety of speculative execution side channel attacks with negligible impact on performance.
+
+When VMs belonging to different customers are running on the same physical server, it's the Hypervisor's job to ensure that they can't learn anything important about what the other customer's VMs are doing. Azure helps block unauthorized direct communication by design; however, there are subtle effects where one customer might be able to characterize the work being done by another customer. The most important of these effects are timing effects when different VMs are competing for the same resources. By carefully comparing operations counts on CPUs with elapsed time, a VM can learn something about what other VMs on the same server are doing. These exploits have received plenty of attention in the academic press where researchers have been seeking to learn more specific information about what's going on in a peer VM.
-When VMs belonging to different customers are running on the same physical server, it's the Hypervisor's job to ensure that they can't learn anything important about what the other customer's VMs are doing. Azure helps block unauthorized direct communication by design; however, there are subtle effects where one customer might be able to characterize the work being done by another customer. The most important of these effects are timing effects when different VMs are competing for the same resources. By carefully comparing operations counts on CPUs with elapsed time, a VM can learn something about what other VMs on the same server are doing. Known as side channel attacks, these exploits have received plenty of attention in the academic press where researchers have been seeking to learn much more specific information about what is going on in a peer VM. Of particular interest are efforts to learn the cryptographic keys of a peer VM by measuring the timing of certain memory accesses and inferring which cache lines the victim's VM is reading and updating. Under controlled conditions with VMs using hyper-threading, successful attacks have been demonstrated against commercially available implementations of cryptographic algorithms. In addition to the previously mentioned Hyper-V HyperClear mitigation architecture that's in use by Azure, there are several extra mitigations in Azure that reduce the risk of such an attack:
+Of particular interest are efforts to learn the **cryptographic keys of a peer VM** by measuring the timing of certain memory accesses and inferring which cache lines the victim's VM is reading and updating. Under controlled conditions with VMs using hyper-threading, successful attacks have been demonstrated against commercially available implementations of cryptographic algorithms. In addition to the previously mentioned Hyper-V HyperClear mitigation architecture that's in use by Azure, there are several extra mitigations in Azure that reduce the risk of such an attack:
- The standard Azure cryptographic libraries have been designed to resist such attacks by not having cache access patterns depend on the cryptographic keys being used. - Azure uses an advanced VM host placement algorithm that is highly sophisticated and nearly impossible to predict, which helps reduce the chances of an adversary-controlled VM being placed on the same host as the target VM. - All Azure servers have at least eight physical cores and some have many more. Increasing the number of cores that share the load placed by various VMs adds noise to an already weak signal. - You can provision VMs on hardware dedicated to a single customer by using [Azure Dedicated Host](../virtual-machines/dedicated-hosts.md) or [Isolated VMs](../virtual-machines/isolation.md), as described in the *[Physical isolation](#physical-isolation)* section. However, physical tenant isolation increases deployment cost and may not be required in most scenarios given the strong logical isolation assurances provided by Azure.
-Overall, PaaS – or any workload that auto-creates VMs – contributes to churn in VM placement that leads to randomized VM allocation. Random placement of your VMs makes it much harder for attackers to get on the same host. In addition, host access is hardened with greatly reduced attack surface that makes these types of exploits difficult to sustain.
+Overall, PaaS – or any workload that autocreates VMs – contributes to churn in VM placement that leads to randomized VM allocation. Random placement of your VMs makes it much harder for attackers to get on the same host. In addition, host access is hardened with greatly reduced attack surface that makes these types of exploits difficult to sustain.
## Summary A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate your applications and data from other customers. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent other customers from accessing your data or applications.
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
recommendations: false Previously updated : 04/02/2023 Last updated : 05/06/2023 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
Microsoft Azure cloud environments meet demanding US government compliance requi
- [FedRAMP High](/azure/compliance/offerings/offering-fedramp) Provisional Authorization to Operate (P-ATO) issued by the FedRAMP Joint Authorization Board (JAB) - [DoD IL2](/azure/compliance/offerings/offering-dod-il2) Provisional Authorization (PA) issued by the Defense Information Systems Agency (DISA)
-**Azure Government** maintains the following authorizations that pertain to Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia:
+**Azure Government** maintains the following authorizations that pertain to Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia (US Gov regions):
- [FedRAMP High](/azure/compliance/offerings/offering-fedramp) P-ATO issued by the JAB - [DoD IL2](/azure/compliance/offerings/offering-dod-il2) PA issued by DISA
For current Azure Government regions and available services, see [Products avail
> [!NOTE] >
-> - Some Azure services deployed in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in **[Isolation guidelines for Impact Level 5 workloads](../documentation-government-impact-level-5.md).**
-> - For DoD IL5 PA compliance scope in Azure Government DoD regions (US DoD Central and US DoD East), see **[Azure Government DoD regions IL5 audit scope](../documentation-government-overview-dod.md#us-dod-regions-il5-audit-scope).**
+> - Some Azure services deployed in Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia (US Gov regions) require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in **[Isolation guidelines for Impact Level 5 workloads](../documentation-government-impact-level-5.md).**
+> - For DoD IL5 PA compliance scope in Azure Government regions US DoD Central and US DoD East (US DoD regions), see **[US DoD regions IL5 audit scope](../documentation-government-overview-dod.md#us-dod-regions-il5-audit-scope).**
**Azure Government Secret** maintains: - [DoD IL6](/azure/compliance/offerings/offering-dod-il6) PA issued by DISA-- [ICD 503](/azure/compliance/offerings/offering-icd-503) ATO with facilities at ICD 705 (for authorization details, contact your Microsoft account representative) - [JSIG PL3](/azure/compliance/offerings/offering-jsig) ATO (for authorization details, contact your Microsoft account representative) **Azure Government Top Secret** maintains:
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
### Terminology used -- Azure Government = Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia
+- Azure Government = Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia (US Gov regions)
- FedRAMP High = FedRAMP High Provisional Authorization to Operate (P-ATO) in Azure Government - DoD IL2 = DoD SRG Impact Level 2 Provisional Authorization (PA) in Azure Government - DoD IL4 = DoD SRG Impact Level 4 Provisional Authorization (PA) in Azure Government
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
> [!NOTE] > > - Some services deployed in Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia (US Gov regions) require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in **[Isolation guidelines for Impact Level 5 workloads](../documentation-government-impact-level-5.md).**
-> - For DoD IL5 PA compliance scope in Azure Government DoD regions US DoD Central and US DoD East (US DoD regions), see **[Azure Government DoD regions IL5 audit scope](../documentation-government-overview-dod.md#us-dod-regions-il5-audit-scope).**
+> - For DoD IL5 PA compliance scope in Azure Government regions US DoD Central and US DoD East (US DoD regions), see **[US DoD regions IL5 audit scope](../documentation-government-overview-dod.md#us-dod-regions-il5-audit-scope).**
| Service | FedRAMP High | DoD IL2 | DoD IL4 | DoD IL5 | DoD IL6 | | - |::|:-:|:-:|:-:|:-:|
azure-government Documentation Government Overview Dod https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-dod.md
recommendations: false Previously updated : 04/02/2023 Last updated : 05/06/2023 # Department of Defense (DoD) in Azure Government
Azure Government offers the following regions to DoD mission owners and their pa
|US Gov Arizona </br> US Gov Texas </br> US Gov Virginia|FedRAMP High, DoD IL4, DoD IL5|145| |US DoD Central </br> US DoD East|DoD IL5|60|
-**Azure Government regions** US Gov Arizona, US Gov Texas, and US Gov Virginia (**US Gov regions**) are intended for US federal (including DoD), state, and local government agencies, and their partners. **Azure Government DoD regions** US DoD Central and US DoD East (**US DoD regions**) are reserved for exclusive DoD use. Separate DoD IL5 PAs are in place for US Gov regions vs. US DoD regions. For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
+Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia (**US Gov regions**) are intended for US federal (including DoD), state, and local government agencies, and their partners. Azure Government regions US DoD Central and US DoD East (**US DoD regions**) are reserved for exclusive DoD use. Separate DoD IL5 PAs are in place for US Gov regions vs. US DoD regions. For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
The primary differences between DoD IL5 PAs that are in place for US Gov regions vs. US DoD regions are: - **IL5 compliance scope:** US Gov regions have many more services authorized provisionally at DoD IL5, which in turn enables DoD mission owners and their partners to deploy more realistic applications in these regions. - For a complete list of services in scope for DoD IL5 PA in US Gov regions, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).
- - For a complete list of services in scope for DoD IL5 in US DoD regions, see [Azure Government DoD regions IL5 audit scope](#us-dod-regions-il5-audit-scope) in this article.
+ - For a complete list of services in scope for DoD IL5 in US DoD regions, see [US DoD regions IL5 audit scope](#us-dod-regions-il5-audit-scope) in this article.
- **IL5 configuration:** US DoD regions are reserved for exclusive DoD use. Therefore, no extra configuration is needed in US DoD regions when deploying Azure services intended for IL5 workloads. In contrast, some Azure services deployed in US Gov regions require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md). > [!NOTE]
The primary differences between DoD IL5 PAs that are in place for US Gov regions
Azure provides [extensive support for tenant isolation](./azure-secure-isolation-guidance.md) across compute, storage, and networking services to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent other customers from accessing your data or applications.
-Hyperscale cloud also offers a feature-rich environment incorporating the latest cloud innovations such as artificial intelligence, machine learning, IoT services, intelligent edge, and many more to help DoD mission owners implement their mission objectives. Using Azure Government cloud capabilities, you benefit from rapid feature growth, resiliency, and the cost-effective operation of the hyperscale cloud while still obtaining the levels of isolation, security, and confidence required to handle workloads subject to FedRAMP High, DoD IL4, and DoD IL5 requirements.
+Hyper-scale cloud also offers a feature-rich environment incorporating the latest cloud innovations such as artificial intelligence, machine learning, IoT services, intelligent edge, and many more to help DoD mission owners implement their mission objectives. Using Azure Government cloud capabilities, you benefit from rapid feature growth, resiliency, and the cost-effective operation of the hyper-scale cloud while still obtaining the levels of isolation, security, and confidence required to handle workloads subject to FedRAMP High, DoD IL4, and DoD IL5 requirements.
## US Gov regions IL5 audit scope
The following services are in scope for DoD IL5 PA in US DoD regions (US DoD Cen
## Frequently asked questions
-### What are the Azure Government DoD regions?
-Azure Government DoD regions US DoD Central and US DoD East (US DoD regions) are physically separated Azure Government regions reserved for exclusive use by the DoD. They reside on the same isolated network as Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia (US Gov regions) and use the same identity model. Both the network and identity model are separate from Azure commercial.
+### What are the US DoD regions?
+Azure Government regions US DoD Central and US DoD East (US DoD regions) are physically separated Azure Government regions reserved for exclusive use by the DoD. They reside on the same isolated network as Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia (US Gov regions) and use the same identity model. Both the network and identity model are separate from Azure commercial.
-### What is the difference between Azure Government and Azure Government DoD regions?
-Azure Government is a US government community cloud providing services for federal, state and local government customers, tribal entities, and other entities subject to various US government regulations such as CJIS, ITAR, and others. All Azure Government regions are designed to meet the security requirements for DoD IL5 workloads. They are deployed on a separate and isolated network and use a separate identity model from Azure commercial regions. US DoD regions achieve DoD IL5 tenant separation requirements by being dedicated exclusively to DoD. In US Gov regions , some services require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md).
+### What is the difference between US Gov regions and US DoD regions?
+Azure Government is a US government community cloud providing services for federal, state and local government customers, tribal entities, and other entities subject to various US government regulations such as CJIS, ITAR, and others. All Azure Government regions are designed to meet the security requirements for DoD IL5 workloads. They're deployed on a separate and isolated network and use a separate identity model from Azure commercial regions. US DoD regions achieve DoD IL5 tenant separation requirements by being dedicated exclusively to DoD. In US Gov regions, some services require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md).
### How do US Gov regions support IL5 data? Azure provides [extensive support for tenant isolation](./azure-secure-isolation-guidance.md) across compute, storage, and networking services to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent other customers from accessing your data or applications. Some Azure services deployed in US Gov regions require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md).
All Azure Government regions are built to support DoD customers, including:
For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). ### What services are part of your IL5 authorization scope?
-For a complete list of services in scope for DoD IL5 PA in US Gov regions, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). For a complete list of services in scope for DoD IL5 PA in US DoD regions, see [Azure Government DoD regions IL5 audit scope](#us-dod-regions-il5-audit-scope) in this article.
+For a complete list of services in scope for DoD IL5 PA in US Gov regions, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). For a complete list of services in scope for DoD IL5 PA in US DoD regions, see [US DoD regions IL5 audit scope](#us-dod-regions-il5-audit-scope) in this article.
## Next steps
azure-government Documentation Government Overview Nerc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-nerc.md
recommendations: false Previously updated : 02/16/2022 Last updated : 05/06/2023 # NERC CIP standards and cloud computing
-This article is intended for electric power utilities and [registered entities](https://www.nerc.com/pa/comp/Pages/Registration.aspx) considering cloud adoption for data and workloads subject to compliance with the North American Electric Reliability Corporation (NERC) [Critical Infrastructure Protection (CIP) standards](https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx).
+This article is intended for electric power utilities and [registered entities](https://www.nerc.com/pa/comp/Pages/Registration.aspx) considering cloud adoption for data and workloads subject to compliance with the North American Electric Reliability Corporation (NERC) [Critical Infrastructure Protection (CIP) standards](https://www.nerc.com/pa/Stand/Pages/default.aspx).
Microsoft makes two different cloud environments available to electric utilities and other registered entities: Azure and Azure Government. Both provide a multi-tenant cloud services platform that registered entities can use to deploy various solutions. A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure and Azure Government use logical isolation to segregate applications and data belonging to different customers. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously preventing customers from accessing one another's data or applications. This article addresses common security and isolation concerns pertinent to the electric power industry. It also discusses compliance considerations for data and workloads deployed on Azure or Azure Government that are subject to NERC CIP standards. For in-depth technical description of isolation approaches, see [Azure guidance for secure isolation](./azure-secure-isolation-guidance.md).
Both Azure and Azure Government are suitable for registered entities deploying c
The [North American Electric Reliability Corporation (NERC)](https://www.nerc.com/AboutNERC/Pages/default.aspx) is a not-for-profit regulatory authority whose mission is to ensure the reliability of the North American bulk power system. NERC is subject to oversight by the US Federal Energy Regulatory Commission (FERC) and governmental authorities in Canada. In 2006, FERC granted the Electric Reliability Organization (ERO) designation to NERC in accordance with the Energy Policy Act of 2005, as stated in the US Public Law 109-58. NERC has jurisdiction over users, owners, and operators of the bulk power system that serves nearly 400 million people in North America. For more information about NERC ERO Enterprise and NERC regional entities, see [NERC key players](https://www.nerc.com/AboutNERC/keyplayers/Pages/default.aspx).
-NERC develops and enforces reliability standards known as NERC [CIP standards](https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx). In the United States, FERC approved the first set of CIP standards in 2007 and has continued to do so with every new revision. In Canada, the Federal, Provincial, and Territorial Monitoring and Enforcement Subgroup (MESG) develops provincial summaries for making CIP standards enforceable in Canadian jurisdictions.
+NERC develops and enforces reliability standards known as NERC [CIP standards](https://www.nerc.com/pa/Stand/Pages/default.aspx). In the United States, FERC approved the first set of CIP standards in 2007 and has continued to do so with every new revision. In Canada, the Federal, Provincial, and Territorial Monitoring and Enforcement Subgroup (MESG) develops provincial summaries for making CIP standards enforceable in Canadian jurisdictions.
## Azure and Azure Government
If you're a registered entity subject to compliance with NERC CIP standards, y
- [Azure FedRAMP compliance offering](/azure/compliance/offerings/offering-fedramp) - [NIST SP 800-53](https://csrc.nist.gov/Projects/risk-management/sp800-53-controls/release-search#!/800-53) *Security and Privacy Controls for Information Systems and Organizations* - [North American Electric Reliability Corporation](https://www.nerc.com/) (NERC)-- NERC [Critical Infrastructure Protection (CIP) standards](https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx)
+- NERC [Critical Infrastructure Protection (CIP) standards](https://www.nerc.com/pa/Stand/Pages/default.aspx)
- NERC [compliance guidance](https://www.nerc.com/pa/comp/guidance/) - NERC [Glossary of Terms](https://www.nerc.com/pa/Stand/Glossary%20of%20Terms/Glossary_of_Terms.pdf)-- NERC [registered entities](https://www.nerc.com/pa/comp/Pages/Registration.aspx)
+- NERC [registered entities](https://www.nerc.com/pa/comp/Pages/Registration.aspx)
azure-health-insights Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/get-started.md
Ocp-Apim-Subscription-Key: {your-cognitive-services-api-key}
{ "State": "FL", "City": "Orlando",
- "Country": "United States"
+ "countryOrRegion": "United States"
} ] }
Ocp-Apim-Subscription-Key: {your-cognitive-services-api-key}
"Patients": [ { "Info": {
- "gender": "female",
+ "sex": "female",
"birthDate": "01/01/1987", "ClinicalInfo": [
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md
Load a map with the same view in Azure Maps along with a map style control and z
Running this code in a browser displays a map that looks like the following image:
-![Simple Azure Maps](media/migrate-google-maps-web-app/simple-azure-maps.png)
+![Simple Azure Maps](media/migrate-google-maps-web-app/simple-azure-maps.jpg)
For more information on how to set up and use the Azure Maps map control in a web app, see [Use the Azure Maps map control].
For more information on supported languages, see [Localization support in Azure
Here's an example of Azure Maps with the language set to "fr" and the user region set to "fr-FR".
-![Azure Maps localization](media/migrate-google-maps-web-app/azure-maps-localization.png)
+![Azure Maps localization](media/migrate-google-maps-web-app/azure-maps-localization.jpg)
### Setting the map view
map.setStyle({
}); ```
-![Azure Maps set view](media/migrate-google-maps-web-app/azure-maps-set-view.jpeg)
+![Azure Maps set view](media/migrate-google-maps-web-app/azure-maps-set-view.jpg)
**More resources:**
map.markers.add(new atlas.HtmlMarker({
})); ```
-![Azure Maps HTML marker](media/migrate-google-maps-web-app/azure-maps-html-marker.png)
+![Azure Maps HTML marker](media/migrate-google-maps-web-app/azure-maps-html-marker.jpg)
**After: Azure Maps using a Symbol Layer**
For a Symbol layer, add the data to a data source. Attach the data source to the
</html> ```
-![Azure Maps symbol layer](media/migrate-google-maps-web-app/azure-maps-symbol-layer.png)
+![Azure Maps symbol layer](media/migrate-google-maps-web-app/azure-maps-symbol-layer.jpg)
**More resources:**
map.markers.add(new atlas.HtmlMarker({
})); ```
-![Azure Maps custom HTML marker](media/migrate-google-maps-web-app/azure-maps-custom-html-marker.png)
+![Azure Maps custom HTML marker](media/migrate-google-maps-web-app/azure-maps-custom-html-marker.jpg)
**After: Azure Maps using a Symbol Layer**
Symbol layers in Azure Maps support custom images as well. First, load the image
</html> ```
-![Azure Maps custom icon symbol layer](media/migrate-google-maps-web-app/azure-maps-custom-icon-symbol-layer.png)</
+![Azure Maps custom icon symbol layer](media/migrate-google-maps-web-app/azure-maps-custom-icon-symbol-layer.jpg)</
> [!TIP] > To render advanced custom points, use multiple rendering layers together. For example, let's say you want to have multiple pushpins that have the same icon on different colored circles. Instead of creating a bunch of images for each color overlay, add a symbol layer on top of a bubble layer. Have the pushpins reference the same data source. This approach will be more efficient than creating and maintaining a bunch of different images.
map.layers.add(new atlas.layer.LineLayer(datasource, null, {
})); ```
-![Azure Maps polyline](media/migrate-google-maps-web-app/azure-maps-polyline.png)
+![Azure Maps polyline](media/migrate-google-maps-web-app/azure-maps-polyline.jpg)
**More resources:**
map.layers.add(new atlas.layer.LineLayer(datasource, null, {
})); ```
-![Azure Maps polygon](media/migrate-google-maps-web-app/azure-maps-polygon.png)
+![Azure Maps polygon](media/migrate-google-maps-web-app/azure-maps-polygon.jpg)
**More resources:**
map.events.add('click', marker, function () {
}); ```
-![Azure Maps popup](media/migrate-google-maps-web-app/azure-maps-popup.png)
+![Azure Maps popup](media/migrate-google-maps-web-app/azure-maps-popup.jpg)
> [!NOTE] > You may do the same thing with a symbol, bubble, line or polygon layer by passing the chosen layer to the maps event code instead of a marker.
GeoJSON is the base data type in Azure Maps. Import it into a data source using
</html> ```
-![Azure Maps GeoJSON](media/migrate-google-maps-web-app/azure-maps-geojson.png)
+![Azure Maps GeoJSON](media/migrate-google-maps-web-app/azure-maps-geojson.jpg)
**More resources:**
Directly import GeoJSON data using the `importDataFromUrl` function on the `Data
</html> ```
-![Azure Maps clustering](media/migrate-google-maps-web-app/azure-maps-clustering.png)
+![Azure Maps clustering](media/migrate-google-maps-web-app/azure-maps-clustering.jpg)
**More resources:**
Load the GeoJSON data into a data source and connect the data source to a heat m
</html> ```
-![Azure Maps heat map](media/migrate-google-maps-web-app/azure-maps-heatmap.png)
+![Azure Maps heat map](media/migrate-google-maps-web-app/azure-maps-heatmap.jpg)
**More resources:**
map.layers.add(new atlas.layer.TileLayer({
}), 'labels'); ```
-![Azure Maps tile layer](media/migrate-google-maps-web-app/azure-maps-tile-layer.png)
+![Azure Maps tile layer](media/migrate-google-maps-web-app/azure-maps-tile-layer.jpg)
> [!TIP] > Tile requests can be captured using the `transformRequest` option of the map. This will allow you to modify or add headers to the request if desired.
map.setTraffic({
}); ```
-![Azure Maps traffic](media/migrate-google-maps-web-app/azure-maps-traffic.png)
+![Azure Maps traffic](media/migrate-google-maps-web-app/azure-maps-traffic.jpg)
If you select one of the traffic icons in Azure Maps, more information is displayed in a popup.
-![Azure Maps traffic incident](media/migrate-google-maps-web-app/azure-maps-traffic-incident.png)
+![Azure Maps traffic incident](media/migrate-google-maps-web-app/azure-maps-traffic-incident.jpg)
**More resources:**
Use the `atlas.layer.ImageLayer` class to overlay georeferenced images. This cla
</html> ```
-![Azure Maps image overlay](media/migrate-google-maps-web-app/azure-maps-image-overlay.png)
+![Azure Maps image overlay](media/migrate-google-maps-web-app/azure-maps-image-overlay.jpg)
**More resources:**
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
To complete this procedure, you need:
## Create a data collection rule
-You can define a data collection rule to send data from multiple machines to multiple Log Analytics workspaces, including workspaces in a different region or tenant. Create the data collection rule in the *same region* as your Log Analytics workspace.
+You can define a data collection rule to send data from multiple machines to multiple Log Analytics workspaces, including workspaces in a different region or tenant. Create the data collection rule in the *same region* as your Log Analytics workspace. You can send Windows event and Syslog data to Azure Monitor Logs only. You can send performance counters to both Azure Monitor Metrics and Azure Monitor Logs.
+
+> [!NOTE]
+> At this time, Microsoft.HybridCompute ([Azure Arc-enabled servers](../../azure-arc/servers/overview.md)) resources can't be viewed in [Metrics Explorer](../essentials/metrics-getting-started.md) (the Azure portal UX), but they can be acquired via the Metrics REST API (Metric Namespaces - List, Metric Definitions - List, and Metrics - List).
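For example, the following hedged Python sketch uses the `azure-monitor-query` package to exercise those three REST operations against an Arc-enabled server. The resource ID and metric name are placeholders, and attribute names follow the package's current models, so verify them against the SDK reference for your version.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

# Placeholder Azure Arc-enabled server (Microsoft.HybridCompute/machines) resource ID.
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.HybridCompute/machines/<machine-name>"
)

# Metric Namespaces - List and Metric Definitions - List.
for namespace in client.list_metric_namespaces(resource_id):
    print("namespace:", namespace.fully_qualified_namespace)
for definition in client.list_metric_definitions(resource_id):
    print("metric:", definition.name)

# Metrics - List: pull the last hour of a metric (name is a placeholder).
result = client.query_resource(
    resource_id,
    metric_names=["<metric-name>"],
    timespan=timedelta(hours=1),
)
for metric in result.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.average)
```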
+ > [!NOTE] > To send data across tenants, you must first enable [Azure Lighthouse](../../lighthouse/overview.md).+ ### [Portal](#tab/portal) 1. On the **Monitor** menu, select **Data Collection Rules**.
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
To create the data collection rule in the Azure portal:
1. Specify the following information:
- - **File Pattern** - Identifies where the log files are located on the local disk. You can enter multiple file patterns separated by commas.
   - **File Pattern** - Identifies where the log files are located on the local disk. You can enter multiple file patterns separated by commas if your Azure Monitor Agent (AMA) uses Fluent Bit v1.5.1 or later.
Examples of valid inputs: - 20220122-MyLog.txt
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
To understand the number of Application Insights resources required to cover you
## How do I use Application Insights?
-Application Insights is enabled through either [autoinstrumentation](codeless-overview.md) (agent) or by adding the [Application Insights SDK](sdk-support-guidance.md) to your application code. [Many languages](#supported-languages) are supported. The applications could be on Azure, on-premises, or hosted by another cloud. To figure out which type of instrumentation is best for you, see [How do I instrument an application?](#how-do-i-instrument-an-application).
+Application Insights is enabled through either [autoinstrumentation](codeless-overview.md) (agent) or by adding the [Application Insights SDK](sdk-support-guidance.md) or [Azure Monitor OpenTelemetry Distro](opentelemetry-enable.md) to your application code. [Many languages](#supported-languages) are supported. The applications could be on Azure, on-premises, or hosted by another cloud. To figure out which type of instrumentation is best for you, see [How do I instrument an application?](#how-do-i-instrument-an-application).
The Application Insights agent or SDK preprocesses telemetry and metrics before sending the data to Azure. Then it's ingested and processed further before it's stored in Azure Monitor Logs (Log Analytics). For this reason, an Azure account is required to use Application Insights.
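As one concrete example of the code-based route, the Azure Monitor OpenTelemetry Distro for Python can be enabled with a couple of lines; a minimal sketch is shown below (the connection string is a placeholder, and the autoinstrumentation route requires no code changes at all).

```python
from azure.monitor.opentelemetry import configure_azure_monitor

# Point the distro at your Application Insights resource; instrumentation for
# supported libraries is configured automatically after this call.
configure_azure_monitor(connection_string="<your-connection-string>")
```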
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
Below is the currently supported list of dependency calls that are automatically
### Java See the list of Application Insights Java's
-[autocollected dependencies](opentelemetry-enable.md?tabs=java#distributed-tracing).
+[autocollected dependencies](opentelemetry-enable.md?tabs=java#included-instrumentation-libraries).
### Node.js
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
+
+ Title: Azure Monitor OpenTelemetry configuration for .NET, Java, Node.js, and Python applications
+description: This article provides configuration guidance for .NET, Java, Node.js, and Python applications.
+ Last updated : 05/10/2023
+ms.devlang: csharp, javascript, typescript, python
+++
+# Azure Monitor OpenTelemetry configuration
+
+This article covers configuration settings for the Azure Monitor OpenTelemetry distro.
+
+> [!TIP]
+> For Node.js, this config guidance applies to the 3.X BETA Package only. If you're using a previous version, see the [Node.js Application Insights SDK Docs](nodejs.md).
+
+## Connection string
+
+A connection string in Application Insights defines the target location for sending telemetry data, ensuring it reaches the appropriate resource for monitoring and analysis.
+
+### [.NET](#tab/net)
+
+Currently unavailable.
+
+### [Java](#tab/java)
+
+For more information about Java, see the [Java supplemental documentation](java-standalone-config.md).
+
+### [Node.js](#tab/nodejs)
+
+Use one of the following two ways to configure the connection string:
+
+- Set an environment variable:
+
+ ```console
+ APPLICATIONINSIGHTS_CONNECTION_STRING=<Your Connection String>
+ ```
+
+- Use configuration object:
+
+ ```javascript
+ const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+ const config = new ApplicationInsightsConfig();
+ config.azureMonitorExporterConfig.connectionString = "<Your Connection String>";
+ const appInsights = new ApplicationInsightsClient(config);
+
+ ```
+
+### [Python](#tab/python)
+
+Currently unavailable.
+++
+## Set the Cloud Role Name and the Cloud Role Instance
+
+You might want to update the [Cloud Role Name](app-map.md#understand-the-cloud-role-name-within-the-context-of-an-application-map) and the Cloud Role Instance from the default values to something that makes sense to your team. They appear on the Application Map as the name underneath a node.
+
+### [.NET](#tab/net)
+
+Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md).
+
+```csharp
+// Setting role name and role instance
+var resourceAttributes = new Dictionary<string, object> {
+ { "service.name", "my-service" },
+ { "service.namespace", "my-namespace" },
+ { "service.instance.id", "my-instance" }};
+var resourceBuilder = ResourceBuilder.CreateDefault().AddAttributes(resourceAttributes);
+// Done setting role name and role instance
+
+// Set ResourceBuilder on the provider.
+var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .SetResourceBuilder(resourceBuilder)
+ .AddSource("OTel.AzureMonitor.Demo")
+ .AddAzureMonitorTraceExporter(o =>
+ {
+ o.ConnectionString = "<Your Connection String>";
+ })
+ .Build();
+```
+
+### [Java](#tab/java)
+
+To set the cloud role name, see [cloud role name](java-standalone-config.md#cloud-role-name).
+
+To set the cloud role instance, see [cloud role instance](java-standalone-config.md#cloud-role-instance).
+
+### [Node.js](#tab/nodejs)
+
+Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md).
+
+```javascript
+...
+const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+const { Resource } = require("@opentelemetry/resources");
+const { SemanticResourceAttributes } = require("@opentelemetry/semantic-conventions");
+// -
+// Setting role name and role instance
+// -
+const config = new ApplicationInsightsConfig();
+config.resource = new Resource({
+ [SemanticResourceAttributes.SERVICE_NAME]: "my-helloworld-service",
+ [SemanticResourceAttributes.SERVICE_NAMESPACE]: "my-namespace",
+ [SemanticResourceAttributes.SERVICE_INSTANCE_ID]: "my-instance",
+});
+const appInsights = new ApplicationInsightsClient(config);
+```
+
+### [Python](#tab/python)
+
+Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md).
+
+Set Resource attributes using the `OTEL_RESOURCE_ATTRIBUTES` and/or `OTEL_SERVICE_NAME` environment variables. `OTEL_RESOURCE_ATTRIBUTES` takes a series of comma-separated key-value pairs. For example, to set the Cloud Role Name to "my-namespace" and set Cloud Role Instance to "my-instance", you can set `OTEL_RESOURCE_ATTRIBUTES` as follows:
+```
+export OTEL_RESOURCE_ATTRIBUTES="service.namespace=my-namespace,service.instance.id=my-instance"
+```
+
+If you don't set Cloud Role Name via the "service.namespace" Resource Attribute, you can alternatively set the Cloud Role Name via the `OTEL_SERVICE_NAME` environment variable:
+```
+export OTEL_RESOURCE_ATTRIBUTES="service.instance.id=my-instance"
+export OTEL_SERVICE_NAME="my-namespace"
+```
+++
+## Enable Sampling
+
+You may want to enable sampling to reduce your data ingestion volume, which reduces your cost. Azure Monitor provides a custom *fixed-rate* sampler that populates events with a "sampling ratio", which Application Insights converts to "ItemCount". The *fixed-rate* sampler ensures accurate experiences and event counts. The sampler is designed to preserve your traces across services, and it's interoperable with older Application Insights SDKs. For more information, see [Learn More about sampling](sampling.md#brief-summary).
+
+> [!NOTE]
+> Metrics are unaffected by sampling.
+
+#### [.NET](#tab/net)
+
+The sampler expects a sample rate between 0 and 1 inclusive. A rate of 0.1 means approximately 10% of your traces are sent.
+
+In this example, we utilize the `ApplicationInsightsSampler`, which offers compatibility with Application Insights SDKs.
+
+```dotnetcli
+dotnet add package --prerelease OpenTelemetry.Extensions.AzureMonitor
+```
+
+```csharp
+var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddSource("OTel.AzureMonitor.Demo")
+ .SetSampler(new ApplicationInsightsSampler(0.1F))
+ .AddAzureMonitorTraceExporter(o =>
+ {
+ o.ConnectionString = "<Your Connection String>";
+ })
+ .Build();
+```
+
+#### [Java](#tab/java)
+
+Starting from 3.4.0, rate-limited sampling is available and is now the default. For more information about sampling, see [Java sampling]( java-standalone-config.md#sampling).
+
+#### [Node.js](#tab/nodejs)
+
+```javascript
+const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+const config = new ApplicationInsightsConfig();
+config.samplingRatio = 0.75;
+const appInsights = new ApplicationInsightsClient(config);
+```
+
+#### [Python](#tab/python)
+
+The `configure_azure_monitor()` function automatically utilizes
+ApplicationInsightsSampler for compatibility with Application Insights SDKs and
+to sample your telemetry. The `OTEL_TRACES_SAMPLER_ARG` environment variable can be used to specify
+the sampling rate, with a valid range of 0 to 1, where 0 is 0% and 1 is 100%.
+For example, a value of 0.1 means 10% of your traces are sent.
+
+```
+export OTEL_TRACES_SAMPLER_ARG=0.1
+```
+++
+> [!TIP]
+> If you're using fixed-rate/percentage sampling and aren't sure what to set the sampling rate to, start at 5% (that is, a 0.05 sampling ratio) and adjust the rate based on the accuracy of the operations shown in the failures and performance blades. A higher rate generally results in higher accuracy. However, any sampling affects accuracy, so we recommend alerting on [OpenTelemetry metrics](opentelemetry-enable.md#metrics), which are unaffected by sampling.
+
+## Enable Azure AD authentication
+
+You might want to enable Azure Active Directory (Azure AD) Authentication for a more secure connection to Azure, which prevents unauthorized telemetry from being ingested into your subscription.
+
+#### [.NET](#tab/net)
+
+Currently unavailable.
+
+#### [Java](#tab/java)
+
+For more information about Java, see the [Java supplemental documentation](java-standalone-config.md).
+
+#### [Node.js](#tab/nodejs)
+
+```javascript
+const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+const { ManagedIdentityCredential } = require("@azure/identity");
+
+const credential = new ManagedIdentityCredential();
+
+const config = new ApplicationInsightsConfig();
+config.azureMonitorExporterConfig.aadTokenCredential = credential;
+const appInsights = new ApplicationInsightsClient(config);
+```
+
+#### [Python](#tab/python)
+
+Currently unavailable.
++++
+## Offline Storage and Automatic Retries
+
+To improve reliability and resiliency, Azure Monitor OpenTelemetry-based offerings write to offline/local storage by default when an application loses its connection with Application Insights. They save the application telemetry to disk and periodically try to send it again for up to 48 hours. In high-load applications, telemetry is occasionally dropped for two reasons: the allowable time is exceeded, or the maximum file size is exceeded (or the SDK doesn't get an opportunity to clear out the file). When a choice must be made, the product saves more recent events over older ones. [Learn More](data-retention-privacy.md#does-the-sdk-create-temporary-local-storage)
+
+### [.NET](#tab/net)
+
+By default, the AzureMonitorExporter uses one of the following locations for offline storage (listed in order of precedence):
+
+- Windows
+ - %LOCALAPPDATA%\Microsoft\AzureMonitor
+ - %TEMP%\Microsoft\AzureMonitor
+- Non-Windows
+ - %TMPDIR%/Microsoft/AzureMonitor
+ - /var/tmp/Microsoft/AzureMonitor
+ - /tmp/Microsoft/AzureMonitor
+
+To override the default directory, you should set `AzureMonitorExporterOptions.StorageDirectory`.
+
+For example:
+```csharp
+var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddAzureMonitorTraceExporter(o => {
+ o.ConnectionString = "<Your Connection String>";
+ o.StorageDirectory = "C:\\SomeDirectory";
+ })
+ .Build();
+```
+
+To disable this feature, you should set `AzureMonitorExporterOptions.DisableOfflineStorage = true`.
+
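+For example, here's a minimal sketch (reusing the exporter setup from the previous example) with offline storage disabled:
+
+```csharp
+var tracerProvider = Sdk.CreateTracerProviderBuilder()
+    .AddAzureMonitorTraceExporter(o => {
+        o.ConnectionString = "<Your Connection String>";
+        // Telemetry isn't persisted to disk for retry when the connection is lost.
+        o.DisableOfflineStorage = true;
+    })
+    .Build();
+```
+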
+### [Java](#tab/java)
+
+Configuring Offline Storage and Automatic Retries isn't available in Java.
+
+For a full list of available configurations, see [Configuration options](./java-standalone-config.md).
+
+### [Node.js](#tab/nodejs)
+
+By default, the AzureMonitorExporter uses one of the following locations for offline storage.
+
+- Windows
+ - %TEMP%\Microsoft\AzureMonitor
+- Non-Windows
+ - %TMPDIR%/Microsoft/AzureMonitor
+ - /var/tmp/Microsoft/AzureMonitor
+
+To override the default directory, you should set `storageDirectory`.
+
+For example:
+```javascript
+const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+const config = new ApplicationInsightsConfig();
+config.azureMonitorExporterConfig = {
+ connectionString: "<Your Connection String>",
+ storageDirectory: "C:\\SomeDirectory",
+ disableOfflineStorage: false
+};
+const appInsights = new ApplicationInsightsClient(config);
+```
+
+To disable this feature, you should set `disableOfflineStorage = true`.
+
+### [Python](#tab/python)
+
+By default, Azure Monitor exporters use the following path:
+
+`<tempfile.gettempdir()>/Microsoft/AzureMonitor/opentelemetry-python-<your-instrumentation-key>`
+
+To override the default directory, you should set `storage_directory` to the directory you want.
+
+For example:
+```python
+...
+configure_azure_monitor(
+ connection_string="your-connection-string",
+ storage_directory="C:\\SomeDirectory",
+)
+...
+
+```
+
+To disable this feature, you should set `disable_offline_storage` to `True`. It defaults to `False`.
+
+For example:
+```python
+...
+configure_azure_monitor(
+ connection_string="your-connection-string",
+ disable_offline_storage=True,
+)
+...
+
+```
+++
+## Enable the OTLP Exporter
+
+You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside your Azure Monitor Exporter to send your telemetry to two locations.
+
+> [!NOTE]
+> The OTLP Exporter is shown for convenience only. We don't officially support the OTLP Exporter or any components or third-party experiences downstream of it.
+
+#### [.NET](#tab/net)
+
+1. Install the [OpenTelemetry.Exporter.OpenTelemetryProtocol](https://www.nuget.org/packages/OpenTelemetry.Exporter.OpenTelemetryProtocol/) package along with [Azure.Monitor.OpenTelemetry.Exporter](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter) in your project.
+
+1. Add the following code snippet. This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. For details, see the [example on GitHub](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/examples/Console/TestOtlpExporter.cs).
+
+ ```csharp
+ // Sends data to Application Insights as well as OTLP
+ using var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddSource("OTel.AzureMonitor.Demo")
+ .AddAzureMonitorTraceExporter(o =>
+ {
+        o.ConnectionString = "<Your Connection String>";
+ })
+ .AddOtlpExporter()
+ .Build();
+ ```
+
+#### [Java](#tab/java)
+
+For more information about Java, see the [Java supplemental documentation](java-standalone-config.md).
+
+#### [Node.js](#tab/nodejs)
+
+1. Install the [OpenTelemetry Collector Exporter](https://www.npmjs.com/package/@opentelemetry/exporter-otlp-http) package in your project.
+
+ ```sh
+ npm install @opentelemetry/exporter-otlp-http
+ ```
+
+2. Add the following code snippet. This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. For details, see the [example on GitHub](https://github.com/open-telemetry/opentelemetry-js/tree/main/examples/otlp-exporter-node).
+
+ ```javascript
+ const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+ const { SimpleSpanProcessor } = require('@opentelemetry/sdk-trace-base');
+ const { OTLPTraceExporter } = require('@opentelemetry/exporter-otlp-http');
+
+ const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
+ const otlpExporter = new OTLPTraceExporter();
+ appInsights.getTraceHandler().getTracerProvider().addSpanProcessor(new SimpleSpanProcessor(otlpExporter));
+ ```
+
+#### [Python](#tab/python)
+
+1. Install the [opentelemetry-exporter-otlp](https://pypi.org/project/opentelemetry-exporter-otlp/) package.
+
+1. Add the following code snippet. This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. For details, see this [README](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/monitor/azure-monitor-opentelemetry-exporter/samples/traces#collector).
+
+ ```python
+ from azure.monitor.opentelemetry import configure_azure_monitor
+ from opentelemetry import trace
+ from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
+ from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+ configure_azure_monitor(
+ connection_string="<your-connection-string>",
+ )
+ tracer = trace.get_tracer(__name__)
+
+ otlp_exporter = OTLPSpanExporter(endpoint="http://localhost:4317")
+ span_processor = BatchSpanProcessor(otlp_exporter)
+ trace.get_tracer_provider().add_span_processor(span_processor)
+
+ with tracer.start_as_current_span("test"):
+ print("Hello world!")
+ ```
+++
+## OpenTelemetry configurations
+
+The following OpenTelemetry configurations can be accessed through environment variables while using the Azure Monitor OpenTelemetry Distros.
+
+### [.NET](#tab/net)
+
+Currently unavailable.
+
+### [Java](#tab/java)
+
+For more information about Java, see the [Java supplemental documentation](java-standalone-config.md).
+
+### [Node.js](#tab/nodejs)
+
+For more information about OpenTelemetry SDK configuration, see the [OpenTelemetry documentation](https://opentelemetry.io/docs/concepts/sdk-configuration).
+
+### [Python](#tab/python)
+
+Currently unavailable.
++
azure-monitor Opentelemetry Dotnet Exporter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-dotnet-exporter.md
+
+ Title: Enable the Azure Monitor OpenTelemetry exporter for .NET applications
+description: This article provides guidance on how to enable the Azure Monitor OpenTelemetry exporter for .NET applications.
+ Last updated : 05/10/2023
+ms.devlang: csharp
+++
+# Enable Azure Monitor OpenTelemetry for .NET applications
+
+This article describes how to enable and configure OpenTelemetry-based data collection to power the experiences within [Azure Monitor Application Insights](app-insights-overview.md#application-insights-overview). To learn more about OpenTelemetry concepts, see the [OpenTelemetry overview](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry).
+
+## OpenTelemetry Release Status
+
+The OpenTelemetry exporter for .NET is currently available as a public preview.
+
+[Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
+
+## Get started
+
+Follow the steps in this section to instrument your application with OpenTelemetry.
+
+### Prerequisites
+
+- An Azure subscription: [Create an Azure subscription for free](https://azure.microsoft.com/free/)
+- An Application Insights resource: [Create an Application Insights resource](create-workspace-resource.md#create-a-workspace-based-resource)
+- Application using an officially supported version of [.NET Core](https://dotnet.microsoft.com/download/dotnet) or [.NET Framework](https://dotnet.microsoft.com/download/dotnet-framework) that's at least .NET Framework 4.6.2
+
+### Install the client libraries
+
+Install the latest [Azure.Monitor.OpenTelemetry.Exporter](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter) NuGet package:
+
+```dotnetcli
+dotnet add package --prerelease Azure.Monitor.OpenTelemetry.Exporter
+```
+
+### Enable Azure Monitor Application Insights
+
+This section provides guidance that shows how to enable OpenTelemetry.
+
+#### Instrument with OpenTelemetry
+
+The following code demonstrates how to enable OpenTelemetry in a C# console application by setting up an OpenTelemetry `TracerProvider`. This code must run at application startup. For ASP.NET Core, it's typically done in the `ConfigureServices` method of the application `Startup` class. For ASP.NET applications, it's typically done in `Global.asax.cs`.
+
+```csharp
+using System.Diagnostics;
+using Azure.Monitor.OpenTelemetry.Exporter;
+using OpenTelemetry;
+using OpenTelemetry.Trace;
+
+public class Program
+{
+ private static readonly ActivitySource MyActivitySource = new ActivitySource(
+ "OTel.AzureMonitor.Demo");
+
+ public static void Main()
+ {
+ using var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddSource("OTel.AzureMonitor.Demo")
+ .AddAzureMonitorTraceExporter(o =>
+ {
+ o.ConnectionString = "<Your Connection String>";
+ })
+ .Build();
+
+ using (var activity = MyActivitySource.StartActivity("TestActivity"))
+ {
+ activity?.SetTag("CustomTag1", "Value1");
+ activity?.SetTag("CustomTag2", "Value2");
+ }
+
+ System.Console.WriteLine("Press Enter key to exit.");
+ System.Console.ReadLine();
+ }
+}
+```
+
+> [!NOTE]
+> The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. You create `ActivitySource` directly by using its constructor instead of by using `TracerProvider`. Each [`ActivitySource`](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/trace/customizing-the-sdk#activity-source) class must be explicitly connected to `TracerProvider` by using `AddSource()`. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
+
+#### Set the Application Insights connection string
+
+You can set the connection string either programmatically or by setting the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`. If both are set, the programmatic connection string takes precedence.
+
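+For example, here's a minimal sketch that omits the programmatic connection string and relies on the environment variable instead (assuming `APPLICATIONINSIGHTS_CONNECTION_STRING` is set before the application starts):
+
+```csharp
+// No connection string in code; the exporter falls back to the
+// APPLICATIONINSIGHTS_CONNECTION_STRING environment variable.
+using var tracerProvider = Sdk.CreateTracerProviderBuilder()
+    .AddSource("OTel.AzureMonitor.Demo")
+    .AddAzureMonitorTraceExporter()
+    .Build();
+```
+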
+You can find your connection string in the Overview Pane of your Application Insights Resource.
++
+Here's how you set the connection string.
+
+Replace the `<Your Connection String>` in the preceding code with the connection string from *your* Application Insights resource.
+
+#### Confirm data is flowing
+
+Run your application and open your **Application Insights Resource** tab in the Azure portal. It might take a few minutes for data to show up in the portal.
++
+> [!IMPORTANT]
+> If you have two or more services that emit telemetry to the same Application Insights resource, you're required to [set Cloud Role Names](#set-the-cloud-role-name-and-the-cloud-role-instance) to represent them properly on the Application Map.
+
+As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. To learn more, see [Statsbeat in Azure Application Insights](./statsbeat.md).
+
+## Set the Cloud Role Name and the Cloud Role Instance
+
+You might want to update the [Cloud Role Name](app-map.md#understand-the-cloud-role-name-within-the-context-of-an-application-map) and the Cloud Role Instance from the default values to something that makes sense to your team. They'll appear on the Application Map as the name underneath a node.
+
+Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md).
+
+```csharp
+// Setting role name and role instance
+var resourceAttributes = new Dictionary<string, object> {
+ { "service.name", "my-service" },
+ { "service.namespace", "my-namespace" },
+ { "service.instance.id", "my-instance" }};
+var resourceBuilder = ResourceBuilder.CreateDefault().AddAttributes(resourceAttributes);
+// Done setting role name and role instance
+
+// Set ResourceBuilder on the provider.
+var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .SetResourceBuilder(resourceBuilder)
+ .AddSource("OTel.AzureMonitor.Demo")
+ .AddAzureMonitorTraceExporter(o =>
+ {
+ o.ConnectionString = "<Your Connection String>";
+ })
+ .Build();
+```
+
+## Enable Sampling
+
+You may want to enable sampling to reduce your data ingestion volume, which reduces your cost. Azure Monitor provides a custom *fixed-rate* sampler that populates events with a "sampling ratio", which Application Insights converts to "ItemCount". The *fixed-rate* sampler ensures accurate experiences and event counts. The sampler is designed to preserve your traces across services, and it's interoperable with older Application Insights SDKs. For more information, see [Learn More about sampling](sampling.md#brief-summary).
+
+> [!NOTE]
+> Metrics are unaffected by sampling.
+
+The sampler expects a sample rate between 0 and 1, inclusive. A rate of 0.1 means approximately 10% of your traces are sent.
+
+In this example, we utilize the `ApplicationInsightsSampler`, which offers compatibility with Application Insights SDKs.
+
+```dotnetcli
+dotnet add package --prerelease OpenTelemetry.Extensions.AzureMonitor
+```
+
+```csharp
+var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddSource("OTel.AzureMonitor.Demo")
+ .SetSampler(new ApplicationInsightsSampler(0.1F))
+ .AddAzureMonitorTraceExporter(o =>
+ {
+ o.ConnectionString = "<Your Connection String>";
+ })
+ .Build();
+```
+
+> [!TIP]
+> When using fixed-rate/percentage sampling, if you aren't sure what to set the sampling rate to, start at 5% (that is, a 0.05 sampling ratio) and adjust the rate based on the accuracy of the operations shown in the failures and performance blades. A higher rate generally results in higher accuracy. However, any sampling affects accuracy, so we recommend alerting on [OpenTelemetry metrics](#metrics), which are unaffected by sampling.
+
+## Instrumentation libraries
+
+The following libraries are validated to work with the current release.
+
+> [!WARNING]
+> Instrumentation libraries are based on experimental OpenTelemetry specifications, which impacts languages in [preview status](#opentelemetry-release-status). Microsoft's *preview* support commitment is to ensure that the following libraries emit data to Azure Monitor Application Insights, but it's possible that breaking changes or experimental mapping will block some data elements.
+
+### Distributed Tracing
+
+Requests
+- [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.8/src/OpenTelemetry.Instrumentation.AspNet/README.md) <sup>[1](#FOOTNOTEONE)</sup> version:
+ [1.0.0-rc9.8](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNet/1.0.0-rc9.8)
+- [ASP.NET
+ Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md) <sup>[1](#FOOTNOTEONE)</sup> version:
+ [1.0.0-rc9.14](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNetCore/1.0.0-rc9.14)
+
+Dependencies
+- [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md) <sup>[1](#FOOTNOTEONE)</sup> version:
+ [1.0.0-rc9.14](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Http/1.0.0-rc9.14)
+- [SqlClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.SqlClient/README.md) <sup>[1](#FOOTNOTEONE)</sup> version:
+ [1.0.0-rc9.14](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.SqlClient/1.0.0-rc9.14)
+
+### Metrics
+
+- [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.8/src/OpenTelemetry.Instrumentation.AspNet/README.md) version:
+ [1.0.0-rc9.8](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNet/1.0.0-rc9.8)
+- [ASP.NET
+ Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md) version:
+ [1.0.0-rc9.14](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNetCore/1.0.0-rc9.14)
+- [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md) version:
+ [1.0.0-rc9.14](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Http/1.0.0-rc9.14)
+- [Runtime](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.Runtime-1.0.0/src/OpenTelemetry.Instrumentation.Runtime/README.md) version: [1.0.0](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime/1.0.0)
+
+> [!TIP]
+> The OpenTelemetry-based offerings currently emit all metrics as [Custom Metrics](#add-custom-metrics) and [Performance Counters](standard-metrics.md#performance-counters) in Metrics Explorer. For .NET, Node.js, and Python, whatever you set as the meter name becomes the metrics namespace.
+
+### Logs
+
+```csharp
+using Azure.Monitor.OpenTelemetry.Exporter;
+using Microsoft.Extensions.Logging;
+using OpenTelemetry.Logs;
+
+public class Program
+{
+ public static void Main()
+ {
+ using var loggerFactory = LoggerFactory.Create(builder =>
+ {
+ builder.AddOpenTelemetry(options =>
+ {
+ options.AddAzureMonitorLogExporter(o =>
+ {
+ o.ConnectionString = "<Your Connection String>";
+                });
+ });
+ });
+
+ var logger = loggerFactory.CreateLogger<Program>();
+
+ logger.LogInformation(eventId: 123, "Hello {name}.", "World");
+
+ System.Console.WriteLine("Press Enter key to exit.");
+ System.Console.ReadLine();
+ }
+}
+```
+
+**Footnote**
+- <a name="FOOTNOTEONE">1</a>: Supports automatic reporting of unhandled exceptions
+
+## Collect custom telemetry
+
+This section explains how to collect custom telemetry from your application.
+
+Depending on your language and signal type, there are different ways to collect custom telemetry, including:
+
+- OpenTelemetry API
+- Language-specific logging/metrics libraries
+- Application Insights Classic API
+
+The following table represents the currently supported custom telemetry types:
+
+| Language | Custom Events | Custom Metrics | Dependencies | Exceptions | Page Views | Requests | Traces |
+|----------|---------------|----------------|--------------|------------|------------|----------|--------|
+| **.NET** | | | | | | | |
+| &nbsp;&nbsp;&nbsp;OpenTelemetry API | | | Yes | Yes | | Yes | |
+| &nbsp;&nbsp;&nbsp;ILogger API | | | | | | | Yes |
+| &nbsp;&nbsp;&nbsp;AI Classic API | | | | | | | |
+
+### Add Custom Metrics
+
+> [!NOTE]
+> Custom Metrics are in preview in Azure Monitor Application Insights. Custom metrics without dimensions are available by default. To view and alert on dimensions, you need to [opt in](pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation).
+
+You may want to collect metrics beyond what is collected by [instrumentation libraries](#instrumentation-libraries).
+
+The OpenTelemetry API offers six metric "instruments" to cover various metric scenarios and you'll need to pick the correct "Aggregation Type" when visualizing metrics in Metrics Explorer. This requirement is true when using the OpenTelemetry Metric API to send metrics and when using an instrumentation library.
+
+The following table shows the recommended [aggregation types](../essentials/metrics-aggregation-explained.md#aggregation-types) for each of the OpenTelemetry Metric Instruments.
+
+| OpenTelemetry Instrument | Azure Monitor Aggregation Type |
+|||
+| Counter | Sum |
+| Asynchronous Counter | Sum |
+| Histogram | Min, Max, Average, Sum and Count |
+| Asynchronous Gauge | Average |
+| UpDownCounter | Sum |
+| Asynchronous UpDownCounter | Sum |
+
+> [!CAUTION]
+> Aggregation types beyond what's shown in the table typically aren't meaningful.
+
+The [OpenTelemetry Specification](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#instrument)
+describes the instruments and provides examples of when you might use each one.
+
+> [!TIP]
+> The histogram is the most versatile and most closely equivalent to the Application Insights Track Metric Classic API. Azure Monitor currently flattens the histogram instrument into our five supported aggregation types, and support for percentiles is underway. Although less versatile, other OpenTelemetry instruments have a lesser impact on your application's performance.
+
+#### Histogram Example
+
+```csharp
+using System.Diagnostics.Metrics;
+using Azure.Monitor.OpenTelemetry.Exporter;
+using OpenTelemetry;
+using OpenTelemetry.Metrics;
+
+public class Program
+{
+ private static readonly Meter meter = new("OTel.AzureMonitor.Demo");
+
+ public static void Main()
+ {
+ using var meterProvider = Sdk.CreateMeterProviderBuilder()
+ .AddMeter("OTel.AzureMonitor.Demo")
+ .AddAzureMonitorMetricExporter(o =>
+ {
+ o.ConnectionString = "<Your Connection String>";
+ })
+ .Build();
+
+ Histogram<long> myFruitSalePrice = meter.CreateHistogram<long>("FruitSalePrice");
+
+ var rand = new Random();
+ myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red"));
+ myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
+ myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
+ myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "green"));
+ myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red"));
+ myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
+
+ System.Console.WriteLine("Press Enter key to exit.");
+ System.Console.ReadLine();
+ }
+}
+```
+
+#### Counter Example
+
+```csharp
+using System.Diagnostics.Metrics;
+using Azure.Monitor.OpenTelemetry.Exporter;
+using OpenTelemetry;
+using OpenTelemetry.Metrics;
+
+public class Program
+{
+ private static readonly Meter meter = new("OTel.AzureMonitor.Demo");
+
+ public static void Main()
+ {
+ using var meterProvider = Sdk.CreateMeterProviderBuilder()
+ .AddMeter("OTel.AzureMonitor.Demo")
+ .AddAzureMonitorMetricExporter(o =>
+ {
+ o.ConnectionString = "<Your Connection String>";
+ })
+ .Build();
+
+ Counter<long> myFruitCounter = meter.CreateCounter<long>("MyFruitCounter");
+
+ myFruitCounter.Add(1, new("name", "apple"), new("color", "red"));
+ myFruitCounter.Add(2, new("name", "lemon"), new("color", "yellow"));
+ myFruitCounter.Add(1, new("name", "lemon"), new("color", "yellow"));
+ myFruitCounter.Add(2, new("name", "apple"), new("color", "green"));
+ myFruitCounter.Add(5, new("name", "apple"), new("color", "red"));
+ myFruitCounter.Add(4, new("name", "lemon"), new("color", "yellow"));
+
+ System.Console.WriteLine("Press Enter key to exit.");
+ System.Console.ReadLine();
+ }
+}
+```
+
+#### Gauge Example
+
+```csharp
+using System.Collections.Generic;
+using System.Diagnostics;
+using System.Diagnostics.Metrics;
+using Azure.Monitor.OpenTelemetry.Exporter;
+using OpenTelemetry;
+using OpenTelemetry.Metrics;
+
+public class Program
+{
+ private static readonly Meter meter = new("OTel.AzureMonitor.Demo");
+
+ public static void Main()
+ {
+ using var meterProvider = Sdk.CreateMeterProviderBuilder()
+ .AddMeter("OTel.AzureMonitor.Demo")
+ .AddAzureMonitorMetricExporter(o =>
+ {
+ o.ConnectionString = "<Your Connection String>";
+ })
+ .Build();
+
+ var process = Process.GetCurrentProcess();
+
+ ObservableGauge<int> myObservableGauge = meter.CreateObservableGauge("Thread.State", () => GetThreadState(process));
+
+ System.Console.WriteLine("Press Enter key to exit.");
+ System.Console.ReadLine();
+ }
+
+ private static IEnumerable<Measurement<int>> GetThreadState(Process process)
+ {
+ foreach (ProcessThread thread in process.Threads)
+ {
+ yield return new((int)thread.ThreadState, new("ProcessId", process.Id), new("ThreadId", thread.Id));
+ }
+ }
+}
+```
+
+### Add Custom Exceptions
+
+Select instrumentation libraries automatically report exceptions to Application Insights.
+However, you may want to manually report exceptions beyond what instrumentation libraries report.
+For instance, exceptions caught by your code aren't ordinarily reported. You may wish to report them
+to draw attention in relevant experiences including the failures section and end-to-end transaction views.
+
+```csharp
+using (var activity = activitySource.StartActivity("ExceptionExample"))
+{
+ try
+ {
+ throw new Exception("Test exception");
+ }
+ catch (Exception ex)
+ {
+ activity?.SetStatus(ActivityStatusCode.Error);
+ activity?.RecordException(ex);
+ }
+}
+```
+
+### Add Custom Spans
+
+You may want to add a custom span when there's a dependency request that's not already collected by an instrumentation library or an application process that you wish to model as a span on the end-to-end transaction view.
+
+> [!NOTE]
+> The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. You create `ActivitySource` directly by using its constructor instead of by using `TracerProvider`. Each [`ActivitySource`](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/trace/customizing-the-sdk#activity-source) class must be explicitly connected to `TracerProvider` by using `AddSource()`. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
+
+```csharp
+using var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddSource("ActivitySourceName")
+ .AddAzureMonitorTraceExporter(o => o.ConnectionString = "<Your Connection String>")
+ .Build();
+
+var activitySource = new ActivitySource("ActivitySourceName");
+
+using (var activity = activitySource.StartActivity("CustomActivity"))
+{
+ // your code here
+}
+```
+
+### Send custom telemetry using the Application Insights Classic API
+
+We recommend you use the OpenTelemetry APIs whenever possible, but there may be some scenarios when you have to use the Application Insights Classic APIs.
+
+Sending custom telemetry through the Application Insights Classic API isn't available in .NET.
+
+## Modify telemetry
+
+This section explains how to modify telemetry.
+
+### Add span attributes
+
+You can use span attributes to add custom properties to your telemetry. You might also use attributes to set optional fields in the Application Insights schema, like Client IP.
+
+#### Add a custom property to a Span
+
+Any [attributes](#add-span-attributes) you add to spans are exported as custom properties. They populate the _customDimensions_ field in the requests, dependencies, traces, or exceptions table.
+
+To add span attributes, use one of the following two ways:
+
+* Use options provided by [instrumentation libraries](#instrumentation-libraries).
+* Add a custom span processor.
+
+> [!TIP]
+> The advantage of using options provided by instrumentation libraries, when they're available, is that the entire context is available. As a result, users can choose to add or filter more attributes. For example, the enrich option in the HttpClient instrumentation library gives users access to the `HttpRequestMessage` itself. They can select anything from it and store it as an attribute.
+
+1. Many instrumentation libraries provide an enrich option. For guidance, see the readme files of individual instrumentation libraries:
+ - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.6/src/OpenTelemetry.Instrumentation.AspNet/README.md#enrich)
+ - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#enrich)
+ - [HttpClient and HttpWebRequest](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.Http/README.md#enrich)
+
+1. Use a custom processor:
+
+> [!TIP]
+> Add the processor shown here *before* the Azure Monitor Exporter.
+
+```csharp
+using var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddSource("OTel.AzureMonitor.Demo")
+ .AddProcessor(new ActivityEnrichingProcessor())
+ .AddAzureMonitorTraceExporter(o =>
+ {
+        o.ConnectionString = "<Your Connection String>";
+ })
+ .Build();
+```
+
+Add `ActivityEnrichingProcessor.cs` to your project with the following code:
+
+```csharp
+using System.Diagnostics;
+using OpenTelemetry;
+using OpenTelemetry.Trace;
+
+public class ActivityEnrichingProcessor : BaseProcessor<Activity>
+{
+ public override void OnEnd(Activity activity)
+ {
+ // The updated activity will be available to all processors which are called after this processor.
+ activity.DisplayName = "Updated-" + activity.DisplayName;
+ activity.SetTag("CustomDimension1", "Value1");
+ activity.SetTag("CustomDimension2", "Value2");
+ }
+}
+```
+
+#### Set the user IP
+
+You can populate the _client_IP_ field for requests by setting the `http.client_ip` attribute on the span. Application Insights uses the IP address to generate user location attributes and then [discards it by default](ip-collection.md#default-behavior).
+
+Use the [add a custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code in `ActivityEnrichingProcessor.cs`:
+
+```csharp
+// only applicable in case of activity.Kind == Server
+activity.SetTag("http.client_ip", "<IP Address>");
+```
+
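+For reference, here's a minimal sketch of what `ActivityEnrichingProcessor.cs` might look like after that replacement; the IP address is a placeholder:
+
+```csharp
+using System.Diagnostics;
+using OpenTelemetry;
+
+public class ActivityEnrichingProcessor : BaseProcessor<Activity>
+{
+    public override void OnEnd(Activity activity)
+    {
+        // Only set the client IP on server spans (incoming requests).
+        if (activity.Kind == ActivityKind.Server)
+        {
+            activity.SetTag("http.client_ip", "<IP Address>");
+        }
+    }
+}
+```
+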
+#### Set the user ID or authenticated user ID
+
+You can populate the _user_Id_ or _user_AuthenticatedId_ field for requests by using the guidance below. User ID is an anonymous user identifier. Authenticated User ID is a known user identifier.
+
+> [!IMPORTANT]
+> Consult applicable privacy laws before you set the Authenticated User ID.
+
+Use the [add a custom property example](#add-a-custom-property-to-a-span), but set the following attribute:
+
+```csharp
+activity?.SetTag("enduser.id", "<User Id>");
+```
+
+### Add Log Attributes
+
+OpenTelemetry uses .NET's `ILogger`. You can attach custom dimensions to logs by using a [message template](/dotnet/core/extensions/logging?tabs=command-line#log-message-template).
+
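+For example, here's a minimal sketch, assuming an `ILogger` instance configured as shown in the Logs section. The named placeholders in the message template surface as custom dimensions on the log telemetry:
+
+```csharp
+// "FruitName" and "Quantity" become custom dimensions on this log record.
+logger.LogInformation("Sold {FruitName} x {Quantity}.", "apple", 3);
+```
+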
+### Filter telemetry
+
+You might use the following ways to filter out telemetry before it leaves your application.
+
+1. Many instrumentation libraries provide a filter option. For guidance, see the readme files of individual instrumentation libraries:
+ - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.6/src/OpenTelemetry.Instrumentation.AspNet/README.md#filter)
+ - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#filter)
+ - [HttpClient and HttpWebRequest](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.Http/README.md#filter)
+
+1. Use a custom processor:
+
+ ```csharp
+ using var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddSource("OTel.AzureMonitor.Demo")
+ .AddProcessor(new ActivityFilteringProcessor())
+ .AddAzureMonitorTraceExporter(o =>
+ {
+        o.ConnectionString = "<Your Connection String>";
+ })
+ .Build();
+ ```
+
+ Add `ActivityFilteringProcessor.cs` to your project with the following code:
+
+ ```csharp
+ using System.Diagnostics;
+ using OpenTelemetry;
+ using OpenTelemetry.Trace;
+
+ public class ActivityFilteringProcessor : BaseProcessor<Activity>
+ {
+ public override void OnStart(Activity activity)
+ {
+ // prevents all exporters from exporting internal activities
+ if (activity.Kind == ActivityKind.Internal)
+ {
+ activity.IsAllDataRequested = false;
+ }
+ }
+ }
+ ```
+
+1. If a particular source isn't explicitly added by using `AddSource("ActivitySourceName")`, then none of the activities created by that source are exported. The sketch after this list illustrates this behavior.
+
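+For example, here's a minimal sketch of that behavior. The source names `Demo.Included` and `Demo.Excluded` are hypothetical; only the source registered with `AddSource()` is exported:
+
+```csharp
+using System.Diagnostics;
+using Azure.Monitor.OpenTelemetry.Exporter;
+using OpenTelemetry;
+using OpenTelemetry.Trace;
+
+using var tracerProvider = Sdk.CreateTracerProviderBuilder()
+    .AddSource("Demo.Included") // only this source is registered
+    .AddAzureMonitorTraceExporter(o => o.ConnectionString = "<Your Connection String>")
+    .Build();
+
+var includedSource = new ActivitySource("Demo.Included");
+var excludedSource = new ActivitySource("Demo.Excluded");
+
+// Exported: the source was registered with AddSource().
+using (var activity = includedSource.StartActivity("ExportedActivity"))
+{
+    activity?.SetTag("CustomTag1", "Value1");
+}
+
+// Not exported: "Demo.Excluded" was never registered, so StartActivity() returns null.
+using (var activity = excludedSource.StartActivity("DroppedActivity"))
+{
+    activity?.SetTag("CustomTag1", "Value1");
+}
+```
+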
+### Get the trace ID or span ID
+
+You might want to get the trace ID or span ID. If your logs are sent to a destination other than Application Insights, adding the trace ID or span ID enables better correlation when you debug and diagnose issues.
+
+> [!NOTE]
+> The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
+
+```csharp
+Activity activity = Activity.Current;
+string traceId = activity?.TraceId.ToHexString();
+string spanId = activity?.SpanId.ToHexString();
+```
+
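+As a usage sketch, assuming an `ILogger` instance named `logger` and the `traceId` and `spanId` values from the preceding snippet, you could stamp the IDs onto a log entry shipped to another destination:
+
+```csharp
+// Hypothetical example: include the IDs so the external log entry can be
+// correlated back to the distributed trace in Application Insights.
+logger.LogWarning("Payment retry scheduled. TraceId: {TraceId} SpanId: {SpanId}", traceId, spanId);
+```
+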
+## Enable the OTLP Exporter
+
+You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside your Azure Monitor Exporter to send your telemetry to two locations.
+
+> [!NOTE]
+> The OTLP Exporter is shown for convenience only. We don't officially support the OTLP Exporter or any components or third-party experiences downstream of it.
+
+1. Install the [OpenTelemetry.Exporter.OpenTelemetryProtocol](https://www.nuget.org/packages/OpenTelemetry.Exporter.OpenTelemetryProtocol/) package along with [Azure.Monitor.OpenTelemetry.Exporter](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter) in your project.
+
+1. Add the following code snippet. This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. For details, see the [example on GitHub](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/examples/Console/TestOtlpExporter.cs).
+
+ ```csharp
+ // Sends data to Application Insights as well as OTLP
+ using var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddSource("OTel.AzureMonitor.Demo")
+ .AddAzureMonitorTraceExporter(o =>
+ {
+        o.ConnectionString = "<Your Connection String>";
+ })
+ .AddOtlpExporter()
+ .Build();
+ ```
+
+## Support
+
+- For OpenTelemetry issues, contact the [OpenTelemetry .NET community](https://github.com/open-telemetry/opentelemetry-dotnet) directly.
+- For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-net/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22).
+
+## OpenTelemetry feedback
+
+To provide feedback:
+
+- Fill out the OpenTelemetry community's [customer feedback survey](https://docs.google.com/forms/d/e/1FAIpQLScUt4reClurLi60xyHwGozgM9ZAz8pNAfBHhbTZ4gFWaaXIRQ/viewform).
+- Tell Microsoft about yourself by joining the [OpenTelemetry Early Adopter Community](https://aka.ms/AzMonOTel/).
+- Engage with other Azure Monitor users in the [Microsoft Tech Community](https://techcommunity.microsoft.com/t5/azure-monitor/bd-p/AzureMonitor).
+- Make a feature request at the [Azure Feedback Forum](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0).
+
+## Next steps
+
+- To review the source code, see the [Azure Monitor Exporter GitHub repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter).
+- To install the NuGet package, check for updates, or view release notes, see the [Azure Monitor Exporter NuGet Package](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter/) page.
+- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter/tests/Azure.Monitor.OpenTelemetry.Exporter.Demo).
+- To learn more about OpenTelemetry and its community, see the [OpenTelemetry .NET GitHub repository](https://github.com/open-telemetry/opentelemetry-dotnet).
+- To enable usage experiences, [enable web or browser user monitoring](javascript.md).
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Title: Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Previously updated : 04/21/2023 Last updated : 05/10/2023 ms.devlang: csharp, javascript, typescript, python # Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python and Java applications
-This article describes how to enable and configure OpenTelemetry-based data collection to power the experiences within [Azure Monitor Application Insights](app-insights-overview.md#application-insights-overview). To learn more about OpenTelemetry concepts, see the [OpenTelemetry overview](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry).
+This article describes how to enable and configure OpenTelemetry-based data collection to power the experiences within [Azure Monitor Application Insights](app-insights-overview.md#application-insights-overview). We will walk through how to install the "Azure Monitor OpenTelemetry Distro." To learn more about OpenTelemetry concepts, see the [OpenTelemetry overview](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry).
## OpenTelemetry Release Status
OpenTelemetry offerings are available for .NET, Node.js, Python and Java applica
- <a name="PREVIEW"> :warning: 2</a>: OpenTelemetry is available as a public preview. [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) > [!NOTE]
-> For a feature-by-feature release status, see the [FAQ](../faq.yml#what-s-the-current-release-state-of-features-within-each-opentelemetry-offering-).
+> For a feature-by-feature release status, see the [FAQ](../faq.yml#what-s-the-current-release-state-of-features-within-the-azure-monitor-opentelemetry-distro-).
## Get started
Follow the steps in this section to instrument your application with OpenTelemet
<!NOTE TO CONTRIBUTORS: PLEASE DO NOT SEPARATE OUT JAVASCRIPT AND TYPESCRIPT INTO DIFFERENT TABS.>
-### [.NET](#tab/net)
+### [ASP.NET Core](#tab/aspnetcore)
- Application using an officially supported version of [.NET Core](https://dotnet.microsoft.com/download/dotnet) or [.NET Framework](https://dotnet.microsoft.com/download/dotnet-framework) that's at least .NET Framework 4.6.2
Follow the steps in this section to instrument your application with OpenTelemet
- Python Application using Python 3.7+
+> [!CAUTION]
+> We have not tested the Azure Monitor OpenTelemetry Distro running side-by-side with the OpenTelemetry Community Package. We recommend you uninstall any OpenTelemetry-related packages before installing the Distro.
-### Install the client libraries
-
-#### [.NET](#tab/net)
-
-Install the latest [Azure.Monitor.OpenTelemetry.Exporter](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter) NuGet package:
+### Install the client library
-```dotnetcli
-dotnet add package --prerelease Azure.Monitor.OpenTelemetry.Exporter
-```
+#### [ASP.NET Core](#tab/aspnetcore)
-If you get an error like "There are no versions available for the package Azure.Monitor.OpenTelemetry.Exporter," it's probably because the setting of NuGet package sources is missing. Try to specify the source with the `-s` option:
+Install the latest [Azure.Monitor.OpenTelemetry.AspNetCore](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.AspNetCore) NuGet package:
```dotnetcli
-# Install the latest package with the NuGet package source specified.
-dotnet add package --prerelease Azure.Monitor.OpenTelemetry.Exporter -s https://api.nuget.org/v3/index.json
+dotnet add package --prerelease Azure.Monitor.OpenTelemetry.AspNetCore
``` #### [Java](#tab/java)
Download the [applicationinsights-agent-3.4.12.jar](https://github.com/microsoft
> [!WARNING] >
-> If you are upgrading from an earlier 3.x version, you may be impacted by changing defaults or slight differences in the data we collect. See the migration notes at the top of the release notes for
+> If you are upgrading from an earlier 3.x version, you may be impacted by changing defaults or slight differences in the data we collect. For more information, see the migration section in the release notes.
> [3.4.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.4.0), > [3.3.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.3.0), > [3.2.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.2.0), and > [3.1.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.1.0)
-> for more details.
#### [Node.js](#tab/nodejs) Install these packages: -- [@opentelemetry/sdk-trace-base](https://www.npmjs.com/package/@opentelemetry/sdk-trace-base)-- [@opentelemetry/sdk-trace-node](https://www.npmjs.com/package/@opentelemetry/sdk-trace-node)-- [@azure/monitor-opentelemetry-exporter](https://www.npmjs.com/package/@azure/monitor-opentelemetry-exporter)-- [@opentelemetry/api](https://www.npmjs.com/package/@opentelemetry/api)
+- [applicationinsights](https://www.npmjs.com/package/applicationinsights/v/beta)
```sh
-npm install @opentelemetry/sdk-trace-base
-npm install @opentelemetry/sdk-trace-node
-npm install @azure/monitor-opentelemetry-exporter
-npm install @opentelemetry/api
+npm install applicationinsights@beta
``` The following packages are also used for some specific scenarios described later in this article:
+- [@opentelemetry/api](https://www.npmjs.com/package/@opentelemetry/api)
- [@opentelemetry/sdk-metrics](https://www.npmjs.com/package/@opentelemetry/sdk-metrics) - [@opentelemetry/resources](https://www.npmjs.com/package/@opentelemetry/resources) - [@opentelemetry/semantic-conventions](https://www.npmjs.com/package/@opentelemetry/semantic-conventions)-- [@opentelemetry/instrumentation-http](https://www.npmjs.com/package/@opentelemetry/instrumentation-http)
+- [@opentelemetry/sdk-trace-base](https://www.npmjs.com/package/@opentelemetry/sdk-trace-base)
```sh
+npm install @opentelemetry/api
npm install @opentelemetry/sdk-metrics npm install @opentelemetry/resources npm install @opentelemetry/semantic-conventions
-npm install @opentelemetry/instrumentation-http
+npm install @opentelemetry/sdk-trace-base
``` #### [Python](#tab/python)
pip install azure-monitor-opentelemetry --pre
### Enable Azure Monitor Application Insights
+To enable Azure Monitor Application Insights, you will make a minor modification to your application and set your "Connection String". The Connection String tells your application where to send the telemetry the Distro collects, and it's unique to you.
-This section provides guidance that shows how to enable OpenTelemetry.
+#### Modify your Application
-#### Instrument with OpenTelemetry
+##### [ASP.NET Core](#tab/aspnetcore)
+Add `UseAzureMonitor()` to your application startup. Depending on your version of .NET Core, this code goes in either your `Startup.cs` or `Program.cs` file.
-##### [.NET](#tab/net)
+```csharp
+using Azure.Monitor.OpenTelemetry.AspNetCore;
+using Microsoft.AspNetCore.Builder;
+using Microsoft.Extensions.DependencyInjection;
-The following code demonstrates how to enable OpenTelemetry in a C# console application by setting up OpenTelemetry TracerProvider. This code must be in the application startup. For ASP.NET Core, it's done typically in the `ConfigureServices` method of the application `Startup` class. For ASP.NET applications, it's done typically in `Global.asax.cs`.
+var builder = WebApplication.CreateBuilder(args);
-```csharp
-using System.Diagnostics;
-using Azure.Monitor.OpenTelemetry.Exporter;
-using OpenTelemetry;
-using OpenTelemetry.Trace;
+builder.Services.AddOpenTelemetry().UseAzureMonitor(
-public class Program
-{
- private static readonly ActivitySource MyActivitySource = new ActivitySource(
- "OTel.AzureMonitor.Demo");
+//Uncomment the line below when setting the Application Insights Connection String via code
+//options => options.ConnectionString = "<Your Connection String>"
- public static void Main()
- {
- using var tracerProvider = Sdk.CreateTracerProviderBuilder()
- .AddSource("OTel.AzureMonitor.Demo")
- .AddAzureMonitorTraceExporter(o =>
- {
- o.ConnectionString = "<Your Connection String>";
- })
- .Build();
+);
- using (var activity = MyActivitySource.StartActivity("TestActivity"))
- {
- activity?.SetTag("CustomTag1", "Value1");
- activity?.SetTag("CustomTag2", "Value2");
- }
+var app = builder.Build();
- System.Console.WriteLine("Press Enter key to exit.");
- System.Console.ReadLine();
- }
-}
+app.Run();
```
-> [!NOTE]
-> The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. You create `ActivitySource` directly by using its constructor instead of by using `TracerProvider`. Each [`ActivitySource`](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/trace/customizing-the-sdk#activity-source) class must be explicitly connected to `TracerProvider` by using `AddSource()`. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
- ##### [Java](#tab/java)
-Java auto-instrumentation is enabled through configuration changes; no code changes are required.
+Java autoinstrumentation is enabled through configuration changes; no code changes are required.
Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.12.jar"` to your application's JVM args.
Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights
##### [Node.js](#tab/nodejs)
-The following code demonstrates how to enable OpenTelemetry in a simple JavaScript application:
- ```javascript
-const { AzureMonitorTraceExporter } = require("@azure/monitor-opentelemetry-exporter");
-const { BatchSpanProcessor } = require("@opentelemetry/sdk-trace-base");
-const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node");
-const { context, trace } = require("@opentelemetry/api")
-
-const provider = new NodeTracerProvider();
-provider.register();
-
-// Create an exporter instance.
-const exporter = new AzureMonitorTraceExporter({
- connectionString: "<Your Connection String>"
-});
-
-// Add the exporter to the provider.
-provider.addSpanProcessor(
- new BatchSpanProcessor(exporter)
-);
-
-// Create a tracer.
-const tracer = trace.getTracer("example-basic-tracer-node");
-
-// Create a span. A span must be closed.
-const parentSpan = tracer.startSpan("main");
+const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+const config = new ApplicationInsightsConfig();
-for (let i = 0; i < 10; i += 1) {
- doWork(parentSpan);
-}
-// Be sure to end the span.
-parentSpan.end();
-
-function doWork(parent) {
- // Start another span. In this example, the main method already started a
- // span, so that will be the parent span, and this will be a child span.
- const ctx = trace.setSpan(context.active(), parent);
-
- // Set attributes to the span.
- // Check the SpanOptions interface for more options that can be set into the span creation
- const spanOptions = {
- attributes: {
- "key": "value"
- }
- };
-
- const span = tracer.startSpan("doWork", spanOptions, ctx);
-
- // Simulate some random work.
- for (let i = 0; i <= Math.floor(Math.random() * 40000000); i += 1) {
- // empty
- }
-
- // Annotate our span to capture metadata about our operation.
- span.addEvent("invoking doWork");
-
- // Mark the end of span execution.
- span.end();
-}
+//Uncomment the line below when setting the Application Insights Connection String via code
+//config.azureMonitorExporterConfig.connectionString = "<Your Connection String>";
+const appInsights = new ApplicationInsightsClient(config);
``` ##### [Python](#tab/python)
-The following code demonstrates how to enable OpenTelemetry in a simple Python application:
- ```python from azure.monitor.opentelemetry import configure_azure_monitor from opentelemetry import trace
input()
+#### Copy the Connection String from your Application Insights Resource
> [!TIP]
-> For .NET, Node.js, and Python, you'll need to manually add [instrumentation libraries](#instrumentation-libraries) to autocollect telemetry across popular frameworks and libraries. For Java, these instrumentation libraries are already included and no additional steps are required.
+> If you don't already have one, now is a great time to [Create an Application Insights Resource](create-workspace-resource.md#create-a-workspace-based-resource).
-#### Set the Application Insights connection string
+To copy your unique Connection String:
-You can find your connection string in the Overview Pane of your Application Insights Resource.
+1. Go to the **Overview** pane of your Application Insights resource.
+2. Find your **Connection String**.
+3. Hover over the connection string and select the **Copy to clipboard** icon.
-Here's how you set the connection string.
+#### Paste the Connection String in your environment
-#### [.NET](#tab/net)
+To paste your Connection String, select from the options below:
-Replace the `<Your Connection String>` in the preceding code with the connection string from *your* Application Insights resource.
-
-#### [Java](#tab/java)
-
-Use one of the following two ways to point the jar file to your Application Insights resource:
+ A. Set via Environment Variable (Recommended)
+
+ Replace `<Your Connection String>` in the following command with *your* unique connection string.
-- Set an environment variable:
-
```console APPLICATIONINSIGHTS_CONNECTION_STRING=<Your Connection String> ```
-
-- Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.12.jar` with the following content:+
+ B. Set via Configuration File - Java Only (Recommended)
+
+ Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.12.jar` with the following content:
```json { "connectionString": "<Your Connection String>" } ```
+ Replace `<Your Connection String>` in the preceding JSON with *your* unique connection string.
-#### [Node.js](#tab/nodejs)
-
-Replace the `<Your Connection String>` in the preceding code with the connection string from *your* Application Insights resource.
-
-#### [Python](#tab/python)
-
-Replace the `<Your Connection String>` in the preceding code with the connection string from *your* Application Insights resource.
+ C. Set via Code - ASP.NET Core, Node.js, and Python Only (Not recommended)
+
+ Uncomment the code line with `<Your Connection String>`, and replace the placeholder with *your* unique connection string.
-
+    > [!NOTE]
+    > If you set the connection string in more than one place, we adhere to the following precedence:
+    > 1. Code
+    > 2. Environment Variable
+    > 3. Configuration File
#### Confirm data is flowing Run your application and open your **Application Insights Resource** tab in the Azure portal. It might take a few minutes for data to show up in the portal.
-> [!NOTE]
-> If you can't run the application or you aren't getting data as expected, see [Troubleshooting](#troubleshooting).
- :::image type="content" source="media/opentelemetry/server-requests.png" alt-text="Screenshot of the Application Insights Overview tab with server requests and server response time highlighted.":::
-> [!IMPORTANT]
-> If you have two or more services that emit telemetry to the same Application Insights resource, you're required to [set Cloud Role Names](#set-the-cloud-role-name-and-the-cloud-role-instance) to represent them properly on the Application Map.
-
-As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. To learn more, see [Statsbeat in Azure Application Insights](./statsbeat.md).
-
-## Set the Cloud Role Name and the Cloud Role Instance
-
-You might want to update the [Cloud Role Name](app-map.md#understand-the-cloud-role-name-within-the-context-of-an-application-map) and the Cloud Role Instance from the default values to something that makes sense to your team. They'll appear on the Application Map as the name underneath a node.
-
-### [.NET](#tab/net)
+That's it. Your application is now being monitored by Application Insights. Everything else below is optional and available for further customization.
-Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md).
+Not working? Check out the troubleshooting page for [ASP.NET Core](/troubleshoot/azure/azure-monitor/app-insights/opentelemetry-troubleshooting-dotnet), [Java](/troubleshoot/azure/azure-monitor/app-insights/opentelemetry-troubleshooting-java), [Node.js](/troubleshoot/azure/azure-monitor/app-insights/opentelemetry-troubleshooting-nodejs), or [Python](/troubleshoot/azure/azure-monitor/app-insights/opentelemetry-troubleshooting-python).
-```csharp
-// Setting role name and role instance
-var resourceAttributes = new Dictionary<string, object> {
- { "service.name", "my-service" },
- { "service.namespace", "my-namespace" },
- { "service.instance.id", "my-instance" }};
-var resourceBuilder = ResourceBuilder.CreateDefault().AddAttributes(resourceAttributes);
-// Done setting role name and role instance
-
-// Set ResourceBuilder on the provider.
-var tracerProvider = Sdk.CreateTracerProviderBuilder()
- .SetResourceBuilder(resourceBuilder)
- .AddSource("OTel.AzureMonitor.Demo")
- .AddAzureMonitorTraceExporter(o =>
- {
- o.ConnectionString = "<Your Connection String>";
- })
- .Build();
-```
-
-### [Java](#tab/java)
-
-To set the cloud role name, see [cloud role name](java-standalone-config.md#cloud-role-name).
-
-To set the cloud role instance, see [cloud role instance](java-standalone-config.md#cloud-role-instance).
-
-### [Node.js](#tab/nodejs)
-
-Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md).
-
-```javascript
-...
-const { Resource } = require("@opentelemetry/resources");
-const { SemanticResourceAttributes } = require("@opentelemetry/semantic-conventions");
-const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node");
-const { MeterProvider } = require("@opentelemetry/sdk-metrics")
-
-// -
-// Setting role name and role instance
-// -
-const testResource = new Resource({
- [SemanticResourceAttributes.SERVICE_NAME]: "my-helloworld-service",
- [SemanticResourceAttributes.SERVICE_NAMESPACE]: "my-namespace",
- [SemanticResourceAttributes.SERVICE_INSTANCE_ID]: "my-instance",
-});
-
-// -
-// Done setting role name and role instance
-// -
-const tracerProvider = new NodeTracerProvider({
- resource: testResource
-});
-
-const meterProvider = new MeterProvider({
- resource: testResource
-});
-```
-
-### [Python](#tab/python)
-
-Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md).
-
-```python
-...
-from azure.monitor.opentelemetry import configure_azure_monitor
-from opentelemetry.sdk.resources import Resource, ResourceAttributes
-
-configure_azure_monitor(
- connection_string="<your-connection-string>",
- resource=Resource.create(
- {
- ResourceAttributes.SERVICE_NAME: "my-helloworld-service",
-# -
-# Setting role name and role instance
-# -
- ResourceAttributes.SERVICE_NAMESPACE: "my-namespace",
- ResourceAttributes.SERVICE_INSTANCE_ID: "my-instance",
-# -
-# Done setting role name and role instance
-# -
- }
- )
-)
-...
-```
---
-## Enable Sampling
-
-You may want to enable sampling to reduce your data ingestion volume, which reduces your cost. Azure Monitor provides a custom *fixed-rate* sampler that populates events with a "sampling ratio", which Application Insights converts to "ItemCount". The *fixed-rate* sampler ensures accurate experiences and event counts. The sampler is designed to preserve your traces across services, and it's interoperable with older Application Insights SDKs. For more information, see [Learn More about sampling](sampling.md#brief-summary).
-
-> [!NOTE]
-> Metrics are unaffected by sampling.
-
-#### [.NET](#tab/net)
-
-The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 means approximately 10% of your traces will be sent.
-
-In this example, we utilize the `ApplicationInsightsSampler`, which offers compatibility with Application Insights SDKs.
-
-```dotnetcli
-dotnet add package --prerelease OpenTelemetry.Extensions.AzureMonitor
-```
-
-```csharp
-var tracerProvider = Sdk.CreateTracerProviderBuilder()
- .AddSource("OTel.AzureMonitor.Demo")
- .SetSampler(new ApplicationInsightsSampler(0.1F))
- .AddAzureMonitorTraceExporter(o =>
- {
- o.ConnectionString = "<Your Connection String>";
- })
- .Build();
-```
-
-#### [Java](#tab/java)
-
-Starting from 3.4.0, rate-limited sampling is available and is now the default. See [sampling]( java-standalone-config.md#sampling) for more information.
-
-#### [Node.js](#tab/nodejs)
-
-```javascript
-const { BasicTracerProvider, SimpleSpanProcessor } = require("@opentelemetry/sdk-trace-base");
-const { ApplicationInsightsSampler, AzureMonitorTraceExporter } = require("@azure/monitor-opentelemetry-exporter");
-
-// Sampler expects a sample rate of between 0 and 1 inclusive
-// A rate of 0.1 means approximately 10% of your traces are sent
-const aiSampler = new ApplicationInsightsSampler(0.75);
-
-const provider = new BasicTracerProvider({
- sampler: aiSampler
-});
-
-const exporter = new AzureMonitorTraceExporter({
- connectionString: "<Your Connection String>"
-});
-
-provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
-provider.register();
-```
-
-#### [Python](#tab/python)
-
-The `configure_azure_monitor()` function will automatically utilize
-ApplicationInsightsSampler for compatibility with Application Insights SDKs and
-to sample your telemetry. The `sampling_ratio` parameter can be used to specify
-the sampling rate, with a valid range of 0 to 1, where 0 is 0% and 1 is 100%.
-For example, a value of 0.1 means 10% of your traces will be sent.
-
-```python
-from azure.monitor.opentelemetry import configure_azure_monitor
-from opentelemetry import trace
-
-configure_azure_monitor(
- # connection_string="<your-connection-string>",
- # Sampling ratio of between 0 and 1 inclusive
- # 0.1 means approximately 10% of your traces are sent
- sampling_ratio=0.1,
-)
-
-tracer = trace.get_tracer(__name__)
-
-for i in range(100):
- # Approximately 90% of these spans should be sampled out
- with tracer.start_as_current_span("hello"):
- print("Hello, World!")
-```
+> [!IMPORTANT]
+> If you have two or more services that emit telemetry to the same Application Insights resource, you're required to [set Cloud Role Names](opentelemetry-configuration.md#set-the-cloud-role-name-and-the-cloud-role-instance) to represent them properly on the Application Map.
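
A minimal ASP.NET Core sketch of that configuration, assuming the distro honors the standard OpenTelemetry Resource attributes (`service.namespace` plus `service.name` map to Cloud Role Name, and `service.instance.id` maps to Cloud Role Instance); the linked article remains the authoritative guidance:

```csharp
// Hedged sketch: give each service a distinct Cloud Role Name via Resource attributes.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenTelemetry()
    .UseAzureMonitor()
    .ConfigureResource(resourceBuilder => resourceBuilder.AddAttributes(new Dictionary<string, object>
    {
        ["service.name"] = "my-service",        // example values, not required names
        ["service.namespace"] = "my-namespace",
        ["service.instance.id"] = "my-instance"
    }));

var app = builder.Build();
app.Run();
```
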
-
+As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. To learn more, see [Statsbeat in Azure Application Insights](./statsbeat.md).
-> [!TIP]
-> When using fixed-rate/percentage sampling and you aren't sure what to set the sampling rate as, start at 5% (i.e., 0.05 sampling ratio) and adjust the rate based on the accuracy of the operations shown in the failures and performance blades. A higher rate generally results in higher accuracy. However, ANY sampling will affect accuracy so we recommend alerting on [OpenTelemetry metrics](#metrics), which are unaffected by sampling.
-## Instrumentation libraries
+## Automatic data collection
-The following libraries are validated to work with the current release.
+The distros automatically collect data by bundling in OpenTelemetry "instrumentation libraries".
-> [!WARNING]
-> Instrumentation libraries are based on experimental OpenTelemetry specifications, which impacts languages in [preview status](#opentelemetry-release-status). Microsoft's *preview* support commitment is to ensure that the following libraries emit data to Azure Monitor Application Insights, but it's possible that breaking changes or experimental mapping will block some data elements.
+### Included instrumentation libraries
-### Distributed Tracing
-
-#### [.NET](#tab/net)
+#### [ASP.NET Core](#tab/aspnetcore)
Requests-- [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.6/src/OpenTelemetry.Instrumentation.AspNet/README.md) <sup>[1](#FOOTNOTEONE)</sup> version:
- [1.0.0-rc9.6](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNet/1.0.0-rc9.6)
- [ASP.NET
- Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.AspNetCore/README.md) <sup>[1](#FOOTNOTEONE)</sup> version:
- [1.0.0-rc9.7](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNetCore/1.0.0-rc9.7)
+ Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
Dependencies-- [HTTP
- clients](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.Http/README.md) <sup>[1](#FOOTNOTEONE)</sup> version:
- [1.0.0-rc9.7](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Http/1.0.0-rc9.7)
-- [SQL
- client](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.SqlClient/README.md) <sup>[1](#FOOTNOTEONE)</sup> version:
- [1.0.0-rc9.7](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.SqlClient/1.0.0-rc9.7)
-
-#### [Java](#tab/java)
+- [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
+- [SqlClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.SqlClient/README.md) <sup>[1](#FOOTNOTEONE)</sup>
-Java 3.x includes the following auto-instrumentation.
+Logging
+- ILogger
+
+For more information about ILogger, see [Logging in C# and .NET](/dotnet/core/extensions/logging) and [code examples](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/logs).
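
For illustration only (the route and message are hypothetical), an `ILogger` resolved from dependency injection needs no extra wiring for its records to reach Application Insights:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Assumes the connection string is supplied via configuration or the
// APPLICATIONINSIGHTS_CONNECTION_STRING environment variable.
builder.Services.AddOpenTelemetry().UseAzureMonitor();

var app = builder.Build();

app.MapGet("/", (ILogger<Program> logger) =>
{
    // Collected automatically through the logging provider the distro registers.
    logger.LogInformation("Handled a request at {Timestamp}", DateTimeOffset.UtcNow);
    return "Hello World!";
});

app.Run();
```
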
-Autocollected requests:
+#### [Java](#tab/java)
+Requests
* JMS consumers
* Kafka consumers
* Netty
Autocollected requests:
* Spring scheduling

> [!NOTE]
-> Servlet and Netty auto-instrumentation covers the majority of Java HTTP services, including Java EE, Jakarta EE, Spring Boot, Quarkus, and Micronaut.
-
-Autocollected dependencies (plus downstream distributed trace propagation):
+> Servlet and Netty autoinstrumentation covers the majority of Java HTTP services, including Java EE, Jakarta EE, Spring Boot, Quarkus, and Micronaut.
+Dependencies (plus downstream distributed trace propagation):
* Apache HttpClient
* Apache HttpAsyncClient
* AsyncHttpClient
Autocollected dependencies (plus downstream distributed trace propagation):
* OkHttp
* RabbitMQ
-Autocollected dependencies (without downstream distributed trace propagation):
-
+Dependencies (without downstream distributed trace propagation):
* Cassandra
* JDBC
* MongoDB (async and sync)
* Redis (Lettuce and Jedis)
+Metrics
+
+* Micrometer Metrics, including Spring Boot Actuator metrics
+* JMX Metrics
+
+Logs
+* Logback (including MDC properties) <sup>[1](#FOOTNOTEONE)</sup> <sup>[3](#FOOTNOTETHREE)</sup>
+* Log4j (including MDC/Thread Context properties) <sup>[1](#FOOTNOTEONE)</sup> <sup>[3](#FOOTNOTETHREE)</sup>
+* JBoss Logging (including MDC properties) <sup>[1](#FOOTNOTEONE)</sup> <sup>[3](#FOOTNOTETHREE)</sup>
+* java.util.logging <sup>[1](#FOOTNOTEONE)</sup> <sup>[3](#FOOTNOTETHREE)</sup>
Telemetry emitted by these Azure SDKs is automatically collected by default:

* [Azure App Configuration](/java/api/overview/azure/data-appconfiguration-readme) 1.1.10+
Telemetry emitted by these Azure SDKs is automatically collected by default:
#### [Node.js](#tab/nodejs)
-Requests/Dependencies
-- [http/https](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http/README.md) version:
- [0.33.0](https://www.npmjs.com/package/@opentelemetry/instrumentation-http/v/0.33.0)
-
-Dependencies
-- [mysql](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-mysql) version:
- [0.25.0](https://www.npmjs.com/package/@opentelemetry/instrumentation-mysql/v/0.25.0)
-
-#### [Python](#tab/python)
+The following OpenTelemetry instrumentation libraries are included as part of the Azure Monitor Application Insights Distro.
Requests-- [Django](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-django) <sup>[1](#FOOTNOTEONE)</sup> version:
- [0.36b0](https://pypi.org/project/opentelemetry-instrumentation-django/0.36b0/)
-- [FastApi](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-fastapi) <sup>[1](#FOOTNOTEONE)</sup> version:
- [0.36b0](https://pypi.org/project/opentelemetry-instrumentation-fastapi/0.36b0/)
-- [Flask](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-flask) <sup>[1](#FOOTNOTEONE)</sup> version:
- [0.36b0](https://pypi.org/project/opentelemetry-instrumentation-flask/0.36b0/)
+- [HTTP/HTTPS](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http) <sup>[2](#FOOTNOTETWO)</sup>
Dependencies-- [Psycopg2](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-psycopg2) version:
- [0.36b0](https://pypi.org/project/opentelemetry-instrumentation-psycopg2/0.36b0/)
-- [Requests](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-requests) <sup>[1](#FOOTNOTEONE)</sup> version:
- [0.36b0](https://pypi.org/project/opentelemetry-instrumentation-requests/0.36b0/)
-- [Urllib](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-urllib) <sup>[1](#FOOTNOTEONE)</sup> version:
- [0.36b0](https://pypi.org/project/opentelemetry-instrumentation-urllib/0.36b0/)
-- [Urllib3](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-urllib3) <sup>[1](#FOOTNOTEONE)</sup> version:
- [0.36b0](https://pypi.org/project/opentelemetry-instrumentation-urllib3/0.36b0/)
+- [MongoDB](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-mongodb)
+- [MySQL](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-mysql)
+- [Postgres](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-pg)
+- [Redis](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-redis)
+- [Redis-4](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-redis-4)
+- [Azure SDK](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/instrumentation/opentelemetry-instrumentation-azure-sdk)
-
+Logs
+- [Node.js console](https://nodejs.org/api/console.html)
+- [Bunyan](https://github.com/trentm/node-bunyan#readme)
+- [Winston](https://github.com/winstonjs/winston#readme)
-### Metrics
-#### [.NET](#tab/net)
+#### [Python](#tab/python)
-- [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.6/src/OpenTelemetry.Instrumentation.AspNet/README.md) version:
- [1.0.0-rc9.6](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNet/1.0.0-rc9.6)
-- [ASP.NET
- Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.AspNetCore/README.md) version:
- [1.0.0-rc9.7](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNetCore/1.0.0-rc9.7)
-- [HTTP
- clients](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.Http/README.md) version:
- [1.0.0-rc9.7](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Http/1.0.0-rc9.7)
-- [Runtime](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.Runtime-1.0.0/src/OpenTelemetry.Instrumentation.Runtime/README.md) version: [1.0.0](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime/1.0.0)
+Requests
+- [Django](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-django) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
+- [FastApi](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-fastapi) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
+- [Flask](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-flask) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
-#### [Java](#tab/java)
+Dependencies
+- [Psycopg2](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-psycopg2)
+- [Requests](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-requests) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
+- [Urllib](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-urllib) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
+- [Urllib3](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-urllib3) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
-Autocollected metrics
+Logs
+- [Python logging library](https://docs.python.org/3/howto/logging.html) <sup>[4](#FOOTNOTEFOUR)</sup>
-* Micrometer Metrics, including Spring Boot Actuator metrics
-* JMX Metrics
+Examples of using the Python logging library can be found on [GitHub](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azure-monitor-opentelemetry/samples/logging).
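
A minimal sketch of that pattern (the logger name and message are illustrative):

```python
import logging

from azure.monitor.opentelemetry import configure_azure_monitor

# Assumes the connection string is passed here or set through the
# APPLICATIONINSIGHTS_CONNECTION_STRING environment variable.
configure_azure_monitor(connection_string="<your-connection-string>")

logger = logging.getLogger(__name__)

# WARNING and above are collected by default (see footnote 4).
logger.warning("Queue length is %d", 42)
```
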
-#### [Node.js](#tab/nodejs)
+ -- [http/https](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http/README.md) version:
- [0.33.0](https://www.npmjs.com/package/@opentelemetry/instrumentation-http/v/0.33.0)
+**Footnotes**
+- <a name="FOOTNOTEONE">1</a>: Supports automatic reporting of unhandled exceptions
+- <a name="FOOTNOTETWO">2</a>: Supports OpenTelemetry Metrics
+- <a name="FOOTNOTETHREE">3</a>: By default, logging is only collected at INFO level or higher. To change this setting, see the [configuration options](./java-standalone-config.md#autocollected-logging).
+- <a name="FOOTNOTEFOUR">4</a>: By default, logging is only collected at WARNING level or higher..
-#### [Python](#tab/python)
+> [!NOTE]
+> The Azure Monitor OpenTelemetry Distros include custom mapping and logic to automatically emit [Application Insights standard metrics](standard-metrics.md).
-Autocollected metrics
+> [!TIP]
+> The OpenTelemetry-based offerings currently emit all OpenTelemetry metrics as [Custom Metrics](opentelemetry-enable.md#add-custom-metrics) and [Performance Counters](standard-metrics.md#performance-counters) in Metrics Explorer. For .NET, Node.js, and Python, whatever you set as the meter name becomes the metrics namespace.
-- [Django](https://pypi.org/project/Django/)-- [FastApi](https://pypi.org/project/requests/)-- [Flask](https://pypi.org/project/Flask/)-- [Requests](https://pypi.org/project/requests/)-- [Urllib](https://docs.python.org/3/library/urllib.html)-- [Urllib3](https://pypi.org/project/urllib3/)
+### Add a community instrumentation library
-
+You can collect more data automatically when you include instrumentation libraries from the OpenTelemetry community.
-> [!TIP]
-> The OpenTelemetry-based offerings currently emit all metrics as [Custom Metrics](#add-custom-metrics) and [Performance Counters](standard-metrics.md#performance-counters) in Metrics Explorer. For .NET, Node.js, and Python, whatever you set as the meter name becomes the metrics namespace.
+> [!NOTE]
+> We don't support and cannot guarantee the quality of community instrumentation libraries. If you would like to suggest a community instrumentation library for us to include in our distro, post or up-vote an idea in our [feedback community](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0).
-### Logs
+> [!CAUTION]
+> Some instrumentation libraries are based on experimental OpenTelemetry semantic specifications. Adding them may leave you vulnerable to future breaking changes.
-#### [.NET](#tab/net)
+### [ASP.NET Core](#tab/aspnetcore)
-Coming soon.
+To add a community library, use the `ConfigureOpenTelemetryMeterProvider` or `ConfigureOpenTelemetryTracerProvider` methods.
-#### [Java](#tab/java)
+The following example demonstrates how the [Runtime Instrumentation](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime) can be added to collect additional metrics.
-Autocollected logs
+```csharp
+var builder = WebApplication.CreateBuilder(args);
-* Logback <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup> (including MDC properties)
-* Log4j <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup> (including MDC/Thread Context properties)
-* JBoss Logging <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup> (including MDC properties)
-* java.util.logging <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
+builder.Services.ConfigureOpenTelemetryMeterProvider((sp, builder) => builder.AddRuntimeInstrumentation());
+builder.Services.AddOpenTelemetry().UseAzureMonitor();
-#### [Node.js](#tab/nodejs)
+var app = builder.Build();
-Coming soon.
+app.Run();
+```
-#### [Python](#tab/python)
+### [Java](#tab/java)
+You cannot extend the Java Distro with community instrumentation libraries. To request that we include another instrumentation library, please open an issue on our GitHub page. You can find a link to our GitHub page in [Next Steps](#next-steps).
-Autocollected logs
+### [Node.js](#tab/nodejs)
-* [Python logging library](https://docs.python.org/3/howto/logging.html) <sup>[3](#FOOTNOTETHREE)</sup>
+Other OpenTelemetry instrumentations are available in the [opentelemetry-js-contrib repository](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node) and can be added by using the TraceHandler in ApplicationInsightsClient.
-See [this](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azure-monitor-opentelemetry/samples/logging) for examples of using the Python logging library.
+ ```javascript
+ const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+ const { ExpressInstrumentation } = require('@opentelemetry/instrumentation-express');
-
+ const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
+ const traceHandler = appInsights.getTraceHandler();
+ traceHandler.addInstrumentation(new ExpressInstrumentation());
+```
-**Footnotes**
-- <a name="FOOTNOTEONE">1</a>: Supports automatic reporting of unhandled exceptions-- <a name="FOOTNOTETWO">2</a>: By default, logging is only collected when that logging is performed at the INFO level or higher. To change this level, see the [configuration options](./java-standalone-config.md#autocollected-logging).-- <a name="FOOTNOTETHREE">3</a>: By default, logging is only collected when that logging is performed at the WARNING level or higher. To change this level, see the [configuration options](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azure-monitor-opentelemetry#usage) and specify `logging_level`.
+### [Python](#tab/python)
+Currently unavailable.
++ ## Collect custom telemetry
Depending on your language and signal type, there are different ways to collect
- OpenTelemetry API
- Language-specific logging/metrics libraries
-- Application Insights Classic API
+- Application Insights [Classic API](api-custom-events-metrics.md)
The following table represents the currently supported custom telemetry types:
-| Custom Telemetry Types | Custom Events | Custom Metrics | Dependencies | Exceptions | Page Views | Requests | Traces |
+| Language | Custom Events | Custom Metrics | Dependencies | Exceptions | Page Views | Requests | Traces |
|-||-|--|||-|--|
-| **.NET** | | | | | | | |
+| **ASP.NET Core** | | | | | | | |
| &nbsp;&nbsp;&nbsp;OpenTelemetry API | | | Yes | Yes | | Yes | |
| &nbsp;&nbsp;&nbsp;iLogger API | | | | | | | Yes |
| &nbsp;&nbsp;&nbsp;AI Classic API | | | | | | | |
The following table represents the currently supported custom telemetry types:
| | | | | | | | |
| **Node.js** | | | | | | | |
| &nbsp;&nbsp;&nbsp;OpenTelemetry API | | Yes | Yes | Yes | | Yes | |
-| &nbsp;&nbsp;&nbsp;Winston, Pino, Bunyan | | | | | | | Yes |
+| &nbsp;&nbsp;&nbsp;Console, Winston, Bunyan| | | | | | | Yes |
| &nbsp;&nbsp;&nbsp;AI Classic API | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| | | | | | | | |
| **Python** | | | | | | | |
The following table represents the currently supported custom telemetry types:
| &nbsp;&nbsp;&nbsp;Python Logging Module | | | | | | | Yes |

> [!NOTE]
-> Application Insights Java 3.x listens for telemetry that's sent to the Application Insights Classic API. Similarly, Application Insights Node.js 3.x collects events created with the Application Insights Classic API. This makes upgrading easier and fills a gap in our custom telemetry support until all custom telemetry types are supported via the OpenTelemetry API.
+> Application Insights Java 3.x listens for telemetry that's sent to the Application Insights [Classic API](api-custom-events-metrics.md). Similarly, Application Insights Node.js 3.x collects events created with the Application Insights [Classic API](api-custom-events-metrics.md). This makes upgrading easier and fills a gap in our custom telemetry support until all custom telemetry types are supported via the OpenTelemetry API.
### Add Custom Metrics

> [!NOTE]
> Custom Metrics are under preview in Azure Monitor Application Insights. Custom metrics without dimensions are available by default. To view and alert on dimensions, you need to [opt-in](pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation).
-You may want to collect metrics beyond what is collected by [instrumentation libraries](#instrumentation-libraries).
+Consider collecting more metrics beyond what's provided by the instrumentation libraries.
-The OpenTelemetry API offers six metric "instruments" to cover various metric scenarios and you'll need to pick the correct "Aggregation Type" when visualizing metrics in Metrics Explorer. This requirement is true when using the OpenTelemetry Metric API to send metrics and when using an instrumentation library.
+The OpenTelemetry API offers six metric "instruments" to cover various metric scenarios and you need to pick the correct "Aggregation Type" when visualizing metrics in Metrics Explorer. This requirement is true when using the OpenTelemetry Metric API to send metrics and when using an instrumentation library.
The following table shows the recommended [aggregation types](../essentials/metrics-aggregation-explained.md#aggregation-types) for each of the OpenTelemetry Metric Instruments.
The [OpenTelemetry Specification](https://github.com/open-telemetry/opentelemetr
describes the instruments and provides examples of when you might use each one. > [!TIP]
-> The histogram is the most versatile and most closely equivalent to the Application Insights Track Metric Classic API. Azure Monitor currently flattens the histogram instrument into our five supported aggregation types, and support for percentiles is underway. Although less versatile, other OpenTelemetry instruments have a lesser impact on your application's performance.
+> The histogram is the most versatile and most closely equivalent to the Application Insights Track Metric [Classic API](api-custom-events-metrics.md). Azure Monitor currently flattens the histogram instrument into our five supported aggregation types, and support for percentiles is underway. Although less versatile, other OpenTelemetry instruments have a lesser impact on your application's performance.
#### Histogram Example
-#### [.NET](#tab/net)
+#### [ASP.NET Core](#tab/aspnetcore)
+
+Application startup must subscribe to a Meter by name.
```csharp
-using System.Diagnostics.Metrics;
-using Azure.Monitor.OpenTelemetry.Exporter;
-using OpenTelemetry;
-using OpenTelemetry.Metrics;
+var builder = WebApplication.CreateBuilder(args);
-public class Program
-{
- private static readonly Meter meter = new("OTel.AzureMonitor.Demo");
+builder.Services.ConfigureOpenTelemetryMeterProvider((sp, builder) => builder.AddMeter("OTel.AzureMonitor.Demo"));
+builder.Services.AddOpenTelemetry().UseAzureMonitor();
- public static void Main()
- {
- using var meterProvider = Sdk.CreateMeterProviderBuilder()
- .AddMeter("OTel.AzureMonitor.Demo")
- .AddAzureMonitorMetricExporter(o =>
- {
- o.ConnectionString = "<Your Connection String>";
- })
- .Build();
+var app = builder.Build();
- Histogram<long> myFruitSalePrice = meter.CreateHistogram<long>("FruitSalePrice");
+app.Run();
+```
- var rand = new Random();
- myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red"));
- myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
- myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
- myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "green"));
- myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red"));
- myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
+The `Meter` must be initialized using that same name.
- System.Console.WriteLine("Press Enter key to exit.");
- System.Console.ReadLine();
- }
-}
+```csharp
+var meter = new Meter("OTel.AzureMonitor.Demo");
+Histogram<long> myFruitSalePrice = meter.CreateHistogram<long>("FruitSalePrice");
+
+var rand = new Random();
+myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red"));
+myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
+myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
+myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "green"));
+myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red"));
+myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
```

#### [Java](#tab/java)
public class Program {
#### [Node.js](#tab/nodejs)

```javascript
- const {
- MeterProvider,
- PeriodicExportingMetricReader,
- } = require("@opentelemetry/sdk-metrics");
- const {
- AzureMonitorMetricExporter,
- } = require("@azure/monitor-opentelemetry-exporter");
-
- const provider = new MeterProvider();
- const exporter = new AzureMonitorMetricExporter({
- connectionString: "<Your Connection String>",
- });
-
- const metricReader = new PeriodicExportingMetricReader({
- exporter: exporter,
- });
-
- provider.addMetricReader(metricReader);
-
- const meter = provider.getMeter("OTel.AzureMonitor.Demo");
+ const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+ const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
+ const customMetricsHandler = appInsights.getMetricHandler().getCustomMetricsHandler();
+ const meter = customMetricsHandler.getMeter();
    let histogram = meter.createHistogram("histogram");
- histogram.record(1, { testKey: "testValue" });
- histogram.record(30, { testKey: "testValue2" });
- histogram.record(100, { testKey2: "testValue" });
+ histogram.record(1, { "testKey": "testValue" });
+ histogram.record(30, { "testKey": "testValue2" });
+ histogram.record(100, { "testKey2": "testValue" });
```

#### [Python](#tab/python)
input()
#### Counter Example
-#### [.NET](#tab/net)
+#### [ASP.NET Core](#tab/aspnetcore)
+
+Application startup must subscribe to a Meter by name.
```csharp
-using System.Diagnostics.Metrics;
-using Azure.Monitor.OpenTelemetry.Exporter;
-using OpenTelemetry;
-using OpenTelemetry.Metrics;
+var builder = WebApplication.CreateBuilder(args);
-public class Program
-{
- private static readonly Meter meter = new("OTel.AzureMonitor.Demo");
+builder.Services.ConfigureOpenTelemetryMeterProvider((sp, builder) => builder.AddMeter("OTel.AzureMonitor.Demo"));
+builder.Services.AddOpenTelemetry().UseAzureMonitor();
- public static void Main()
- {
- using var meterProvider = Sdk.CreateMeterProviderBuilder()
- .AddMeter("OTel.AzureMonitor.Demo")
- .AddAzureMonitorMetricExporter(o =>
- {
- o.ConnectionString = "<Your Connection String>";
- })
- .Build();
+var app = builder.Build();
- Counter<long> myFruitCounter = meter.CreateCounter<long>("MyFruitCounter");
+app.Run();
+```
- myFruitCounter.Add(1, new("name", "apple"), new("color", "red"));
- myFruitCounter.Add(2, new("name", "lemon"), new("color", "yellow"));
- myFruitCounter.Add(1, new("name", "lemon"), new("color", "yellow"));
- myFruitCounter.Add(2, new("name", "apple"), new("color", "green"));
- myFruitCounter.Add(5, new("name", "apple"), new("color", "red"));
- myFruitCounter.Add(4, new("name", "lemon"), new("color", "yellow"));
+The `Meter` must be initialized using that same name.
- System.Console.WriteLine("Press Enter key to exit.");
- System.Console.ReadLine();
- }
-}
+```csharp
+var meter = new Meter("OTel.AzureMonitor.Demo");
+Counter<long> myFruitCounter = meter.CreateCounter<long>("MyFruitCounter");
+
+myFruitCounter.Add(1, new("name", "apple"), new("color", "red"));
+myFruitCounter.Add(2, new("name", "lemon"), new("color", "yellow"));
+myFruitCounter.Add(1, new("name", "lemon"), new("color", "yellow"));
+myFruitCounter.Add(2, new("name", "apple"), new("color", "green"));
+myFruitCounter.Add(5, new("name", "apple"), new("color", "red"));
+myFruitCounter.Add(4, new("name", "lemon"), new("color", "yellow"));
```

#### [Java](#tab/java)
public class Program {
#### [Node.js](#tab/nodejs)

```javascript
- const {
- MeterProvider,
- PeriodicExportingMetricReader,
- } = require("@opentelemetry/sdk-metrics");
- const { AzureMonitorMetricExporter } = require("@azure/monitor-opentelemetry-exporter");
-
- const provider = new MeterProvider();
- const exporter = new AzureMonitorMetricExporter({
- connectionString: "<Your Connection String>",
- });
- const metricReader = new PeriodicExportingMetricReader({
- exporter: exporter,
- });
- provider.addMetricReader(metricReader);
- const meter = provider.getMeter("OTel.AzureMonitor.Demo");
+ const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+ const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
+ const customMetricsHandler = appInsights.getMetricHandler().getCustomMetricsHandler();
+ const meter = customMetricsHandler.getMeter();
    let counter = meter.createCounter("counter");
    counter.add(1, { "testKey": "testValue" });
    counter.add(5, { "testKey2": "testValue" });
input()
#### Gauge Example
-#### [.NET](#tab/net)
+#### [ASP.NET Core](#tab/aspnetcore)
+
+Application startup must subscribe to a Meter by name.
+
+```csharp
+var builder = WebApplication.CreateBuilder(args);
+
+builder.Services.ConfigureOpenTelemetryMeterProvider((sp, builder) => builder.AddMeter("OTel.AzureMonitor.Demo"));
+builder.Services.AddOpenTelemetry().UseAzureMonitor();
+
+var app = builder.Build();
+
+app.Run();
+```
+
+The `Meter` must be initialized using that same name.
+
+```csharp
+var process = Process.GetCurrentProcess();
+
+var meter = new Meter("OTel.AzureMonitor.Demo");
+ObservableGauge<int> myObservableGauge = meter.CreateObservableGauge("Thread.State", () => GetThreadState(process));
+
+private static IEnumerable<Measurement<int>> GetThreadState(Process process)
+{
+ foreach (ProcessThread thread in process.Threads)
+ {
+ yield return new((int)thread.ThreadState, new("ProcessId", process.Id), new("ThreadId", thread.Id));
+ }
+}
+```
+ ```csharp using System.Diagnostics.Metrics;
using OpenTelemetry.Metrics;
public class Program {
- private static readonly Meter meter = new("OTel.AzureMonitor.Demo");
+ internal static readonly Meter meter = new("OTel.AzureMonitor.Demo");
public static void Main() {
public class Program {
} ```
-#### [Node.js](#tab/nodejs)
-
-```javascript
- const {
- MeterProvider,
- PeriodicExportingMetricReader
- } = require("@opentelemetry/sdk-metrics");
- const { AzureMonitorMetricExporter } = require("@azure/monitor-opentelemetry-exporter");
-
- const provider = new MeterProvider();
- const exporter = new AzureMonitorMetricExporter({
- connectionString:
- connectionString: "<Your Connection String>",
- });
- const metricReader = new PeriodicExportingMetricReader({
- exporter: exporter
- });
- provider.addMetricReader(metricReader);
- const meter = provider.getMeter("OTel.AzureMonitor.Demo");
+#### [Node.js](#tab/nodejs)
+
+```typescript
+ const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+ const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
+ const customMetricsHandler = appInsights.getMetricHandler().getCustomMetricsHandler();
+ const meter = customMetricsHandler.getMeter();
let gauge = meter.createObservableGauge("gauge");
- gauge.addCallback((observableResult) => {
+ gauge.addCallback((observableResult: ObservableResult) => {
        let randomNumber = Math.floor(Math.random() * 100);
        observableResult.observe(randomNumber, {"testKey": "testValue"});
    });
However, you may want to manually report exceptions beyond what instrumentation
For instance, exceptions caught by your code aren't ordinarily reported. You may wish to report them to draw attention in relevant experiences including the failures section and end-to-end transaction views.
-#### [.NET](#tab/net)
+#### [ASP.NET Core](#tab/aspnetcore)
```csharp
using (var activity = activitySource.StartActivity("ExceptionExample"))
You can use `opentelemetry-api` to update the status of a span and record except
#### [Node.js](#tab/nodejs) ```javascript
-const { trace } = require("@opentelemetry/api");
-const { BasicTracerProvider, SimpleSpanProcessor } = require("@opentelemetry/sdk-trace-base");
-const { AzureMonitorTraceExporter } = require("@azure/monitor-opentelemetry-exporter");
+const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
-const provider = new BasicTracerProvider();
-const exporter = new AzureMonitorTraceExporter({
- connectionString: "<Your Connection String>",
-});
-provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
-provider.register();
-const tracer = trace.getTracer("example-basic-tracer-node");
+const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
+const tracer = appInsights.getTraceHandler().getTracer();
let span = tracer.startSpan("hello");
try {
    throw new Error("Test Error");
catch(error){
#### [Python](#tab/python)
-The OpenTelemetry Python SDK is implemented in such a way that exceptions thrown will automatically be captured and recorded. See below for an example of this behavior.
+The OpenTelemetry Python SDK is implemented in such a way that exceptions thrown are automatically captured and recorded. See the following code sample for an example of this behavior.
```python
from azure.monitor.opentelemetry import configure_azure_monitor
except Exception:
``` If you would like to record exceptions manually, you can disable that option
-within the context manager and use `record_exception()` directly as shown below:
+within the context manager and use `record_exception()` directly as shown in the following example:
```python ...
with tracer.start_as_current_span("hello", record_exception=False) as span:
### Add Custom Spans
-You may want to add a custom span when there's a dependency request that's not already collected by an instrumentation library or an application process that you wish to model as a span on the end-to-end transaction view.
-
-#### [.NET](#tab/net)
-
-Coming soon.
+You may want to add a custom span in two scenarios. First, when there's a dependency request not already collected by an instrumentation library. Second, when you wish to model an application process as a span on the end-to-end transaction view.
+#### [ASP.NET Core](#tab/aspnetcore)
+
+> [!NOTE]
+> The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. You create `ActivitySource` directly by using its constructor instead of by using `TracerProvider`. Each [`ActivitySource`](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/trace/customizing-the-sdk#activity-source) class must be explicitly connected to `TracerProvider` by using `AddSource()`. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
++
+```csharp
+internal static readonly ActivitySource activitySource = new("ActivitySourceName");
+
+var builder = WebApplication.CreateBuilder(args);
+
+builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => builder.AddSource("ActivitySourceName"));
+builder.Services.AddOpenTelemetry().UseAzureMonitor();
+
+var app = builder.Build();
+
+app.MapGet("/", () =>
+{
+ using (var activity = activitySource.StartActivity("CustomActivity"))
+ {
+ // your code here
+ }
+
+ return $"Hello World!";
+});
+
+app.Run();
+```
+
+By default, the activity ends up in the Application Insights `dependencies` table with dependency type `InProc`.
+
+For code representing a background job not captured by an instrumentation library, we recommend setting `ActivityKind.Server` in the `StartActivity` method to ensure it appears in the Application Insights `requests` table.
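
A hedged sketch of that recommendation, reusing the hypothetical `activitySource` from the preceding example:

```csharp
// ActivityKind.Server makes the background job appear as a request
// instead of an in-process dependency.
using (var activity = activitySource.StartActivity("ProcessQueueMessage", ActivityKind.Server))
{
    // Background job logic goes here.
}
```
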
+ #### [Java](#tab/java)
-#### Use the OpenTelemetry annotation
+##### Use the OpenTelemetry annotation
The simplest way to add your own spans is by using OpenTelemetry's `@WithSpan` annotation.
Spans populate the `requests` and `dependencies` tables in Application Insights.
} ```
-By default, the span will end up in the `dependencies` table with dependency type `InProc`.
+By default, the span ends up in the `dependencies` table with dependency type `InProc`.
-If your method represents a background job that isn't already captured by auto-instrumentation,
-we recommend that you apply the attribute `kind = SpanKind.SERVER` to the `@WithSpan` annotation
-so that it will end up in the Application Insights `requests` table.
+For methods representing a background job not captured by autoinstrumentation, we recommend applying the attribute `kind = SpanKind.SERVER` to the `@WithSpan` annotation to ensure they appear in the Application Insights `requests` table.
-#### Use the OpenTelemetry API
+##### Use the OpenTelemetry API
If the preceding OpenTelemetry `@WithSpan` annotation doesn't meet your needs, you can add your spans by using the OpenTelemetry API.
you can add your spans by using the OpenTelemetry API.
#### [Node.js](#tab/nodejs)
-Coming soon.
-
-#### [Python](#tab/python)
+```javascript
+const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+
+const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
+const tracer = appInsights.getTraceHandler().getTracer();
+let span = tracer.startSpan("hello");
+span.end();
+```
-#### Use the OpenTelemetry API
-The OpenTelemetry API can be used to add your own spans, which will appear in
-the `requests` and `dependencies` tables in Application Insights.
+#### [Python](#tab/python)
+
+The OpenTelemetry API can be used to add your own spans, which appear in the `requests` and `dependencies` tables in Application Insights.
-The code example shows how to use the `tracer.start_as_current_span()` method to
-start, make the span current, and end the span within its context.
+The code example shows how to use the `tracer.start_as_current_span()` method to start, make the span current, and end the span within its context.
```python ...
with tracer.start_as_current_span("my first span") as span:
```
-By default, the span will be in the `dependencies` table with a dependency type of `InProc`.
+By default, the span is in the `dependencies` table with a dependency type of `InProc`.
-If your method represents a background job that isn't already captured by
-auto-instrumentation, we recommend that you set the attribute `kind =
-SpanKind.SERVER` so that it will end up in the Application Insights `requests`
-table.
+If your method represents a background job not already captured by autoinstrumentation, we recommend setting the attribute `kind = SpanKind.SERVER` to ensure it appears in the Application Insights `requests` table.
```python ...
The OpenTelemetry Logs/Events API is still under development. In the meantime, y
> [!CAUTION] > Span Events are only recommended for when you need additional diagnostic metadata associated with your span. For other scenarios, such as describing business events, we recommend you wait for the release of the OpenTelemetry Events API.
-#### [.NET](#tab/net)
+#### [ASP.NET Core](#tab/aspnetcore)
-Coming soon.
+Currently unavailable.
#### [Java](#tab/java)
You can use `opentelemetry-api` to create span events, which populate the `trace
#### [Node.js](#tab/nodejs)
-Coming soon.
+Currently unavailable.
#### [Python](#tab/python)
-Coming soon.
+Currently unavailable.
-->

### Send custom telemetry using the Application Insights Classic API
+We recommend you use the OpenTelemetry APIs whenever possible, but there may be some scenarios when you have to use the Application Insights [Classic API](api-custom-events-metrics.md).
-We recommend you use the OpenTelemetry APIs whenever possible, but there may be some scenarios when you have to use the Application Insights Classic APIs.
-
-#### [.NET](#tab/net)
+#### [ASP.NET Core](#tab/aspnetcore)
-This is not available in .NET.
+It isn't available in .NET.
#### [Java](#tab/java)
This is not available in .NET.
``` #### [Node.js](#tab/nodejs)
-
-Coming soon.
-
++
+1. Get `LogHandler`:
+
+```javascript
+const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
+const logHandler = appInsights.getLogHandler();
+```
+
+1. Use the `LogHandler` to send custom telemetry:
+
+ ##### Events
+
+ ```javascript
+ let eventTelemetry = {
+ name: "testEvent"
+ };
+ logHandler.trackEvent(eventTelemetry);
+ ```
+
+ ##### Logs
+
+ ```javascript
+ let traceTelemetry = {
+ message: "testMessage",
+ severity: "Information"
+ };
+ logHandler.trackTrace(traceTelemetry);
+ ```
+
+ ##### Exceptions
+
+ ```javascript
+ try {
+ ...
+ } catch (error) {
+ let exceptionTelemetry = {
+ exception: error,
+ severity: "Critical"
+ };
+ logHandler.trackException(exceptionTelemetry);
+ }
+ ```
+ #### [Python](#tab/python)
-This is not available in Python.
+It isn't available in Python.
These attributes might include adding a custom property to your telemetry. You m
Any [attributes](#add-span-attributes) you add to spans are exported as custom properties. They populate the _customDimensions_ field in the requests, dependencies, traces, or exceptions table.
-##### [.NET](#tab/net)
+##### [ASP.NET Core](#tab/aspnetcore)
To add span attributes, use either of the following two ways:
-* Use options provided by [instrumentation libraries](#instrumentation-libraries).
+* Use options provided by [instrumentation libraries](#install-the-client-library).
* Add a custom span processor.

> [!TIP]
> The advantage of using options provided by instrumentation libraries, when they're available, is that the entire context is available. As a result, users can select to add or filter more attributes. For example, the enrich option in the HttpClient instrumentation library gives users access to the httpRequestMessage itself. They can select anything from it and store it as an attribute.

1. Many instrumentation libraries provide an enrich option. For guidance, see the readme files of individual instrumentation libraries:
- - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.6/src/OpenTelemetry.Instrumentation.AspNet/README.md#enrich)
- - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#enrich)
- - [HttpClient and HttpWebRequest](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.Http/README.md#enrich)
+ - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#enrich)
+ - [HttpClient and HttpWebRequest](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#enrich)
1. Use a custom processor:

> [!TIP]
-> Add the processor shown here *before* the Azure Monitor Exporter.
+> Add the processor shown here *before* adding Azure Monitor.
```csharp
-using var tracerProvider = Sdk.CreateTracerProviderBuilder()
- .AddSource("OTel.AzureMonitor.Demo")
- .AddProcessor(new ActivityEnrichingProcessor())
- .AddAzureMonitorTraceExporter(o =>
- {
- o.ConnectionString = "<Your Connection String>"
- })
- .Build();
+var builder = WebApplication.CreateBuilder(args);
+
+builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => builder.AddProcessor(new ActivityEnrichingProcessor()));
+builder.Services.AddOpenTelemetry().UseAzureMonitor();
+
+var app = builder.Build();
+
+app.Run();
```

Add `ActivityEnrichingProcessor.cs` to your project with the following code:
Adding one or more span attributes populates the `customDimensions` field in the
##### [Node.js](#tab/nodejs)
-Use a custom processor:
-
-> [!TIP]
-> Add the processor shown here *before* the Azure Monitor Exporter.
+```typescript
+const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+const { ReadableSpan, Span, SpanProcessor } = require("@opentelemetry/sdk-trace-base");
+const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
-```javascript
-const { AzureMonitorTraceExporter } = require("@azure/monitor-opentelemetry-exporter");
-const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node");
-const { SimpleSpanProcessor } = require("@opentelemetry/sdk-trace-base");
+const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
-class SpanEnrichingProcessor {
- forceFlush() {
+class SpanEnrichingProcessor implements SpanProcessor{
+ forceFlush(): Promise<void>{
return Promise.resolve(); }
- shutdown() {
+ shutdown(): Promise<void>{
return Promise.resolve(); }
- onStart(_span){}
- onEnd(span){
+ onStart(_span: Span): void{}
+ onEnd(span: ReadableSpan){
span.attributes["CustomDimension1"] = "value1"; span.attributes["CustomDimension2"] = "value2";
+ span.attributes[SemanticAttributes.HTTP_CLIENT_IP] = "<IP Address>";
    }
}
-const provider = new NodeTracerProvider();
-const azureExporter = new AzureMonitorTraceExporter({
- connectionString: "<Your Connection String>"
-});
-
-provider.addSpanProcessor(new SpanEnrichingProcessor());
-provider.addSpanProcessor(new SimpleSpanProcessor(azureExporter));
+appInsights.getTraceHandler().addSpanProcessor(new SpanEnrichingProcessor());
```

##### [Python](#tab/python)
class SpanEnrichingProcessor(SpanProcessor):
span._attributes["CustomDimension1"] = "Value1" span._attributes["CustomDimension2"] = "Value2" ```+ #### Set the user IP You can populate the _client_IP_ field for requests by setting the `http.client_ip` attribute on the span. Application Insights uses the IP address to generate user location attributes and then [discards it by default](ip-collection.md#default-behavior).
-##### [.NET](#tab/net)
+##### [ASP.NET Core](#tab/aspnetcore)
Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code in `ActivityEnrichingProcessor.cs`:
Java automatically populates this field.
Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code:
-```javascript
+```typescript
... const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
-class SpanEnrichingProcessor {
+class SpanEnrichingProcessor implements SpanProcessor{
... onEnd(span){
span._attributes["http.client_ip"] = "<IP Address>"
#### Set the user ID or authenticated user ID
-You can populate the _user_Id_ or _user_AuthenticatedId_ field for requests by using the guidance below. User ID is an anonymous user identifier. Authenticated User ID is a known user identifier.
+You can populate the _user_Id_ or _user_AuthenticatedId_ field for requests by using the following guidance. User ID is an anonymous user identifier. Authenticated User ID is a known user identifier.
> [!IMPORTANT]
> Consult applicable privacy laws before you set the Authenticated User ID.
-##### [.NET](#tab/net)
+##### [ASP.NET Core](#tab/aspnetcore)
+
+Use the add [custom property example](#add-a-custom-property-to-a-span).
-Coming soon.
+```csharp
+activity?.SetTag("enduser.id", "<User Id>");
+```
##### [Java](#tab/java)
span._attributes["enduser.id"] = "<User ID>"
### Add Log Attributes
-#### [.NET](#tab/net)
-
-Coming soon.
+#### [ASP.NET Core](#tab/aspnetcore)
+
+OpenTelemetry uses .NET's ILogger.
+Attaching custom dimensions to logs can be accomplished using a [message template](/dotnet/core/extensions/logging?tabs=command-line#log-message-template).
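
For example (the property names are hypothetical), named placeholders in the template are captured with the log record and surface as custom dimensions:

```csharp
var orderId = "12345";   // illustrative values
var elapsedMs = 42;

// "OrderId" and "ElapsedMs" become customDimensions on the resulting log telemetry.
logger.LogInformation("Processed order {OrderId} in {ElapsedMs} ms", orderId, elapsedMs);
```
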
#### [Java](#tab/java)
-Logback, Log4j, and java.util.logging are [auto-instrumented](#logs). Attaching custom dimensions to your logs can be accomplished in these ways:
+Logback, Log4j, and java.util.logging are [autoinstrumented](#logs). Attaching custom dimensions to your logs can be accomplished in these ways:
-* [Logback MDC](http://logback.qos.ch/manual/mdc.html)
-* [Log4j 2 MapMessage](https://logging.apache.org/log4j/2.x/log4j-api/apidocs/org/apache/logging/log4j/message/MapMessage.html) (a `MapMessage` key of `"message"` will be captured as the log message)
+* [Log4j 2 MapMessage](https://logging.apache.org/log4j/2.x/log4j-api/apidocs/org/apache/logging/log4j/message/MapMessage.html) (a `MapMessage` key of `"message"` is captured as the log message)
* [Log4j 2 Thread Context](https://logging.apache.org/log4j/2.x/manual/thread-context.html) * [Log4j 1.2 MDC](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/MDC.html) #### [Node.js](#tab/nodejs)
-Coming soon.
+Currently unavailable.
#### [Python](#tab/python)
-The Python [logging](https://docs.python.org/3/howto/logging.html) library is [auto-instrumented](#logs). You can attach custom dimensions to your logs by passing a dictionary into the `extra` argument of your logs.
+The Python [logging](https://docs.python.org/3/howto/logging.html) library is [autoinstrumented](#logs). You can attach custom dimensions to your logs by passing a dictionary into the `extra` argument of your logs.
```python ...
logger.warning("WARNING: Warning log with properties", extra={"key1": "value1"})
You might use the following ways to filter out telemetry before it leaves your application.
-#### [.NET](#tab/net)
+#### [ASP.NET Core](#tab/aspnetcore)
1. Many instrumentation libraries provide a filter option. For guidance, see the readme files of individual instrumentation libraries:
- - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.6/src/OpenTelemetry.Instrumentation.AspNet/README.md#filter)
- - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#filter)
- - [HttpClient and HttpWebRequest](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.Http/README.md#filter)
+ - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#filter)
+ - [HttpClient and HttpWebRequest](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#filter)
1. Use a custom processor:
+ > [!TIP]
+ > Add the processor shown here *before* adding Azure Monitor.
+ ```csharp
- using var tracerProvider = Sdk.CreateTracerProviderBuilder()
- .AddSource("OTel.AzureMonitor.Demo")
- .AddProcessor(new ActivityFilteringProcessor())
- .AddAzureMonitorTraceExporter(o =>
- {
- o.ConnectionString = "<Your Connection String>"
- })
- .Build();
+ var builder = WebApplication.CreateBuilder(args);
+
+ builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => builder.AddProcessor(new ActivityFilteringProcessor()));
+ builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => builder.AddSource("ActivitySourceName"));
+ builder.Services.AddOpenTelemetry().UseAzureMonitor();
+
+ var app = builder.Build();
+
+ app.Run();
    ```

    Add `ActivityFilteringProcessor.cs` to your project with the following code:
You might use the following ways to filter out telemetry before it leaves your a
} ```
-1. If a particular source isn't explicitly added by using `AddSource("ActivitySourceName")`, then none of the activities created by using that source will be exported.
+1. If a particular source isn't explicitly added by using `AddSource("ActivitySourceName")`, then none of the activities created by using that source are exported.
#### [Java](#tab/java)
See [sampling overrides](java-standalone-config.md#sampling-overrides-preview) a
The following example shows how to exclude a certain URL from being tracked by using the [HTTP/HTTPS instrumentation library](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http):
- ```javascript
- const { registerInstrumentations } = require( "@opentelemetry/instrumentation");
- const { HttpInstrumentation } = require( "@opentelemetry/instrumentation-http");
- const { NodeTracerProvider } = require( "@opentelemetry/sdk-trace-node");
-
- const httpInstrumentationConfig = {
- ignoreIncomingRequestHook: (request) => {
+ ```typescript
+ const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+ const { IncomingMessage } = require("http");
+ const { RequestOptions } = require("https");
+ const { HttpInstrumentationConfig }= require("@opentelemetry/instrumentation-http");
+
+ const httpInstrumentationConfig: HttpInstrumentationConfig = {
+ enabled: true,
+ ignoreIncomingRequestHook: (request: IncomingMessage) => {
// Ignore OPTIONS incoming requests if (request.method === 'OPTIONS') { return true; } return false; },
- ignoreOutgoingRequestHook: (options) => {
+ ignoreOutgoingRequestHook: (options: RequestOptions) => {
// Ignore outgoing requests with /test path if (options.path === '/test') { return true;
See [sampling overrides](java-standalone-config.md#sampling-overrides-preview) a
return false; } };-
- const httpInstrumentation = new HttpInstrumentation(httpInstrumentationConfig);
- const provider = new NodeTracerProvider();
- provider.register();
-
- registerInstrumentations({
- instrumentations: [
- httpInstrumentation,
- ]
- });
+ const config = new ApplicationInsightsConfig();
+ config.instrumentations.http = httpInstrumentationConfig;
+ const appInsights = new ApplicationInsightsClient(config);
``` 2. Use a custom processor. You can use a custom span processor to exclude certain spans from being exported. To mark spans to not be exported, set `TraceFlag` to `DEFAULT`. Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code:
- ```javascript
+ ```typescript
const { SpanKind, TraceFlags } = require("@opentelemetry/api"); class SpanEnrichingProcessor {
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
} } ```
-
-#### [Python](#tab/python)
-1. Exclude the URL option provided by many HTTP instrumentation libraries.
+#### [Python](#tab/python)
- The following example shows how to exclude a specific URL from being tracked by using the [Flask](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-flask) instrumentation configuration options in the `configure_azure_monitor()` function.
+1. Exclude the URL with the `OTEL_PYTHON_EXCLUDED_URLS` environment variable:
+ ```
+ export OTEL_PYTHON_EXCLUDED_URLS="http://localhost:8080/ignore"
+ ```
+ Doing so excludes the endpoint shown in the following Flask example:
```python ...
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
# Configure Azure monitor collection telemetry pipeline configure_azure_monitor( connection_string="<your-connection-string>",
- # Pass in instrumentation configuration via kwargs
- # Key: <instrumentation-name>_config
- # Value: Dictionary of configuration keys and values
- flask_config={"excluded_urls": "http://localhost:8080/ignore"},
) app = flask.Flask(__name__)
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
### Get the trace ID or span ID
-You might want to get the trace ID or span ID. If you have logs that are sent to a different destination besides Application Insights, you might want to add the trace ID or span ID to enable better correlation when you debug and diagnose issues.
+You might want to get the trace ID or span ID. If you have logs sent to a destination other than Application Insights, consider adding the trace ID or span ID. Doing so enables better correlation when debugging and diagnosing issues.
+
+#### [ASP.NET Core](#tab/aspnetcore)
-#### [.NET](#tab/net)
+> [!NOTE]
+> The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
-Coming soon.
+```csharp
+Activity activity = Activity.Current;
+string traceId = activity?.TraceId.ToHexString();
+string spanId = activity?.SpanId.ToHexString();
+```
#### [Java](#tab/java)
You can use `opentelemetry-api` to get the trace ID or span ID.
#### [Node.js](#tab/nodejs)
-Coming soon.
+Get the request trace ID and the span ID in your code:
+
+ ```javascript
+ const { trace } = require("@opentelemetry/api");
+
+ let spanId = trace.getActiveSpan().spanContext().spanId;
+ let traceId = trace.getActiveSpan().spanContext().traceId;
+ ```
#### [Python](#tab/python)
Get the request trace ID and the span ID in your code:
-## Enable the OTLP Exporter
-
-You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside your Azure Monitor Exporter to send your telemetry to two locations.
-
-> [!NOTE]
-> The OTLP Exporter is shown for convenience only. We don't officially support the OTLP Exporter or any components or third-party experiences downstream of it.
-
-#### [.NET](#tab/net)
-
-1. Install the [OpenTelemetry.Exporter.OpenTelemetryProtocol](https://www.nuget.org/packages/OpenTelemetry.Exporter.OpenTelemetryProtocol/) package along with [Azure.Monitor.OpenTelemetry.Exporter](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter) in your project.
-
-1. Add the following code snippet. This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. For details, see the [example on GitHub](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/examples/Console/TestOtlpExporter.cs).
-
- ```csharp
- // Sends data to Application Insights as well as OTLP
- using var tracerProvider = Sdk.CreateTracerProviderBuilder()
- .AddSource("OTel.AzureMonitor.Demo")
- .AddAzureMonitorTraceExporter(o =>
- {
- o.ConnectionString = "<Your Connection String>"
- })
- .AddOtlpExporter()
- .Build();
- ```
-
-#### [Java](#tab/java)
-
-Coming soon.
-
-#### [Node.js](#tab/nodejs)
-
-1. Install the [OpenTelemetry Collector Exporter](https://www.npmjs.com/package/@opentelemetry/exporter-otlp-http) package along with the [Azure Monitor OpenTelemetry Exporter](https://www.npmjs.com/package/@azure/monitor-opentelemetry-exporter) in your project.
-
- ```sh
- npm install @opentelemetry/exporter-otlp-http
- npm install @azure/monitor-opentelemetry-exporter
- ```
-
-2. Add the following code snippet. This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. For details, see the [example on GitHub](https://github.com/open-telemetry/opentelemetry-js/tree/main/examples/otlp-exporter-node).
-
- ```javascript
- const { BasicTracerProvider, SimpleSpanProcessor } = require('@opentelemetry/sdk-trace-base');
- const { OTLPTraceExporter } = require('@opentelemetry/exporter-otlp-http');
- const { AzureMonitorTraceExporter } = require("@azure/monitor-opentelemetry-exporter");
-
- const provider = new BasicTracerProvider();
- const azureMonitorExporter = new AzureMonitorTraceExporter({
- connectionString: "<Your Connection String>",
- });
- const otlpExporter = new OTLPTraceExporter();
- provider.addSpanProcessor(new SimpleSpanProcessor(azureMonitorExporter));
- provider.addSpanProcessor(new SimpleSpanProcessor(otlpExporter));
- provider.register();
- ```
-
-#### [Python](#tab/python)
-
-1. Install the [azure-monitor-opentelemetry-exporter](https://pypi.org/project/azure-monitor-opentelemetry-exporter/) and [opentelemetry-exporter-otlp](https://pypi.org/project/opentelemetry-exporter-otlp/) packages.
-
-1. Add the following code snippet. This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. For details, see this [README](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/monitor/azure-monitor-opentelemetry-exporter/samples/traces#collector).
-
- ```python
- from azure.monitor.opentelemetry import configure_azure_monitor
- from opentelemetry import trace
- from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
- from opentelemetry.sdk.trace import TracerProvider
- from opentelemetry.sdk.trace.export import BatchSpanProcessor
-
- configure_azure_monitor(
- connection_string="<your-connection-string>",
- )
- tracer = trace.get_tracer(__name__)
-
- exporter = AzureMonitorTraceExporter(connection_string="<your-connection-string>")
- otlp_exporter = OTLPSpanExporter(endpoint="http://localhost:4317")
- span_processor = BatchSpanProcessor(otlp_exporter)
- trace.get_tracer_provider().add_span_processor(span_processor)
-
- with tracer.start_as_current_span("test"):
- print("Hello world!")
- ```
---
-## Configuration
-
-### Offline Storage and Automatic Retries
-
-To improve reliability and resiliency, Azure Monitor OpenTelemetry-based offerings write to offline/local storage by default when an application loses its connection with Application Insights. It saves the application telemetry to disk and periodically tries to send it again for up to 48 hours. In addition to exceeding the allowable time, telemetry will occasionally be dropped in high-load applications when the maximum file size is exceeded or the SDK doesn't have an opportunity to clear out the file. If we need to choose, the product will save more recent events over old ones. [Learn More](data-retention-privacy.md#does-the-sdk-create-temporary-local-storage)
-
-#### [.NET](#tab/net)
-
-By default, the AzureMonitorExporter uses one of the following locations for offline storage (listed in order of precedence):
--- Windows
- - %LOCALAPPDATA%\Microsoft\AzureMonitor
- - %TEMP%\Microsoft\AzureMonitor
-- Non-Windows
- - %TMPDIR%/Microsoft/AzureMonitor
- - /var/tmp/Microsoft/AzureMonitor
- - /tmp/Microsoft/AzureMonitor
-
-To override the default directory, you should set `AzureMonitorExporterOptions.StorageDirectory`.
-
-For example:
-```csharp
-var tracerProvider = Sdk.CreateTracerProviderBuilder()
- .AddAzureMonitorTraceExporter(o => {
- o.ConnectionString = "<Your Connection String>";
- o.StorageDirectory = "C:\\SomeDirectory";
- })
- .Build();
-```
-
-To disable this feature, you should set `AzureMonitorExporterOptions.DisableOfflineStorage = true`.
-
-#### [Java](#tab/java)
-
-Configuring Offline Storage and Automatic Retries is not available in Java.
-
-For a full list of available configurations, see [Configuration options](./java-standalone-config.md).
-
-#### [Node.js](#tab/nodejs)
-
-By default, the AzureMonitorExporter uses one of the following locations for offline storage.
--- Windows
- - %TEMP%\Microsoft\AzureMonitor
-- Non-Windows
- - %TMPDIR%/Microsoft/AzureMonitor
- - /var/tmp/Microsoft/AzureMonitor
-
-To override the default directory, you should set `storageDirectory`.
-
-For example:
-```javascript
-const exporter = new AzureMonitorTraceExporter({
- connectionString: "<Your Connection String>",
- storageDirectory: "C:\\SomeDirectory",
- disableOfflineStorage: false
-});
-```
-
-To disable this feature, you should set `disableOfflineStorage = true`.
-
-#### [Python](#tab/python)
-
-By default, the Azure Monitor exporters will use the following path:
-
-`<tempfile.gettempdir()>/Microsoft/AzureMonitor/opentelemetry-python-<your-instrumentation-key>`
-
-To override the default directory, you should set `storage_directory` to the directory you want.
-
-For example:
-```python
-...
-configure_azure_monitor(
- connection_string="your-connection-string",
- storage_directory="C:\\SomeDirectory",
-)
-...
-
-```
-
-To disable this feature, you should set `disable_offline_storage` to `True`. Defaults to `False`.
-
-For example:
-```python
-...
-configure_azure_monitor(
- connection_string="your-connection-string",
- disable_offline_storage=True,
-)
-...
-
-```
---
-## Troubleshooting
-
-This section provides help with troubleshooting.
-
-### Enable diagnostic logging
-
-#### [.NET](#tab/net)
-
-The Azure Monitor Exporter uses EventSource for its own internal logging. The exporter logs are available to any EventListener by opting into the source named OpenTelemetry-AzureMonitor-Exporter. For troubleshooting steps, see [OpenTelemetry Troubleshooting](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/src/OpenTelemetry#troubleshooting).
-
-#### [Java](#tab/java)
-
-Diagnostic logging is enabled by default. For more information, see the dedicated [troubleshooting article](java-standalone-troubleshoot.md).
-
-#### [Node.js](#tab/nodejs)
-
-Azure Monitor Exporter uses the OpenTelemetry API Logger for internal logs. To enable it, use the following code:
-
-```javascript
-const { diag, DiagConsoleLogger, DiagLogLevel } = require("@opentelemetry/api");
-const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node");
-
-const provider = new NodeTracerProvider();
-diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.ALL);
-provider.register();
-```
-
-#### [Python](#tab/python)
-
-The Azure Monitor Exporter uses the Python standard logging [library](https://docs.python.org/3/library/logging.html) for its own internal logging. OpenTelemetry API and Azure Monitor Exporter logs are logged at the severity level of WARNING or ERROR for irregular activity. The INFO severity level is used for regular or successful activity. By default, the Python logging library sets the severity level to WARNING, so you must change the severity level to see logs under this severity setting. The following example shows how to output logs of *all* severity levels to the console *and* a file:
-
-```python
-...
-import logging
-
-logging.basicConfig(format="%(asctime)s:%(levelname)s:%(message)s", level=logging.DEBUG)
-
-logger = logging.getLogger(__name__)
-file = logging.FileHandler("example.log")
-stream = logging.StreamHandler()
-logger.addHandler(file)
-logger.addHandler(stream)
-...
-
-```
---
-### Known issues
-
-Known issues for the Azure Monitor OpenTelemetry Exporters include:
-
-#### [.NET](#tab/net)
--- Operation name is missing on dependency telemetry, which adversely affects failures and performance tab experience.-- Device model is missing on request and dependency telemetry, which adversely affects device cohort analysis.-- Database server name is left out of dependency name, which incorrectly aggregates tables with the same name on different servers.-
-#### [Java](#tab/java)
-
-No known issues.
-
-#### [Node.js](#tab/nodejs)
--- Operation name is missing on dependency telemetry, which adversely affects failures and performance tab experience.-- Device model is missing on request and dependency telemetry, which adversely affects device cohort analysis.-- Database server name is left out of dependency name, which incorrectly aggregates tables with the same name on different servers.-
-#### [Python](#tab/python)
--- Operation name is missing on dependency telemetry, which adversely affects failures and performance tab experience.-- Device model is missing on request and dependency telemetry, which adversely affects device cohort analysis.-- Database server name is left out of dependency name, which incorrectly aggregates tables with the same name on different servers.---- ## Support
-To get support:
-- Review troubleshooting steps in this article.-- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).-
-### [.NET](#tab/net)
+### [ASP.NET Core](#tab/aspnetcore)
- For OpenTelemetry issues, contact the [OpenTelemetry .NET community](https://github.com/open-telemetry/opentelemetry-dotnet) directly. - For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-net/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22).
To get support:
- For help with troubleshooting, review the [troubleshooting steps](java-standalone-troubleshoot.md). - For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/). - For OpenTelemetry issues, contact the [OpenTelemetry community](https://opentelemetry.io/community/) directly.-- For a list of open issues related to Azure Monitor Java Auto-Instrumentation, see the [GitHub Issues Page](https://github.com/microsoft/ApplicationInsights-Java/issues).
+- For a list of open issues related to Azure Monitor Java Autoinstrumentation, see the [GitHub Issues Page](https://github.com/microsoft/ApplicationInsights-Java/issues).
### [Node.js](#tab/nodejs)
To provide feedback:
## Next steps
-### [.NET](#tab/net)
+### [ASP.NET Core](#tab/aspnetcore)
-- To review the source code, see the [Azure Monitor Exporter GitHub repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter).-- To install the NuGet package, check for updates, or view release notes, see the [Azure Monitor Exporter NuGet Package](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter/) page.-- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter/tests/Azure.Monitor.OpenTelemetry.Exporter.Demo).
+- To further configure the OpenTelemetry distro, see [Azure Monitor OpenTelemetry configuration](opentelemetry-configuration.md)
+- To review the source code, see the [Azure Monitor AspNetCore GitHub repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore).
+- To install the NuGet package, check for updates, or view release notes, see the [Azure Monitor AspNetCore NuGet Package](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.AspNetCore/) page.
+- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore/tests/Azure.Monitor.OpenTelemetry.AspNetCore.Demo).
- To learn more about OpenTelemetry and its community, see the [OpenTelemetry .NET GitHub repository](https://github.com/open-telemetry/opentelemetry-dotnet). - To enable usage experiences, [enable web or browser user monitoring](javascript.md). ### [Java](#tab/java) -- Review [Java auto-instrumentation configuration options](java-standalone-config.md).-- To review the source code, see the [Azure Monitor Java auto-instrumentation GitHub repository](https://github.com/Microsoft/ApplicationInsights-Java).
+- Review [Java autoinstrumentation configuration options](java-standalone-config.md).
+- To review the source code, see the [Azure Monitor Java autoinstrumentation GitHub repository](https://github.com/Microsoft/ApplicationInsights-Java).
- To learn more about OpenTelemetry and its community, see the [OpenTelemetry Java GitHub repository](https://github.com/open-telemetry/opentelemetry-java-instrumentation). - To enable usage experiences, see [Enable web or browser user monitoring](javascript.md). - See the [release notes](https://github.com/microsoft/ApplicationInsights-Java/releases) on GitHub. ### [Node.js](#tab/nodejs) -- To review the source code, see the [Azure Monitor Exporter GitHub repository](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry-exporter).-- To install the npm package, check for updates, or view release notes, see the [Azure Monitor Exporter npm Package](https://www.npmjs.com/package/@azure/monitor-opentelemetry-exporter) page.-- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry-exporter/samples).
+- To review the source code, see the [Application Insights Beta GitHub repository](https://github.com/microsoft/ApplicationInsights-node.js/tree/beta).
+- To install the npm package and check for updates, see the [applicationinsights npm Package](https://www.npmjs.com/package/applicationinsights/v/beta) page.
+- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-node.js).
- To learn more about OpenTelemetry and its community, see the [OpenTelemetry JavaScript GitHub repository](https://github.com/open-telemetry/opentelemetry-js). - To enable usage experiences, [enable web or browser user monitoring](javascript.md). ### [Python](#tab/python) -- To review the source code and additional documentation, see the [Azure Monitor Distro GitHub repository](https://github.com/microsoft/ApplicationInsights-Python/blob/main/azure-monitor-opentelemetry/README.md).-- To see additional samples and use cases, see [Azure Monitor Distro samples](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azure-monitor-opentelemetry/samples).
+- To review the source code and extra documentation, see the [Azure Monitor Distro GitHub repository](https://github.com/microsoft/ApplicationInsights-Python/blob/main/azure-monitor-opentelemetry/README.md).
+- To see extra samples and use cases, see [Azure Monitor Distro samples](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azure-monitor-opentelemetry/samples).
- See the [release notes](https://github.com/microsoft/ApplicationInsights-Python/releases) on GitHub. - To install the PyPI package, check for updates, or view release notes, see the [Azure Monitor Distro PyPI Package](https://pypi.org/project/azure-monitor-opentelemetry/) page. - To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-python). - To learn more about OpenTelemetry and its community, see the [OpenTelemetry Python GitHub repository](https://github.com/open-telemetry/opentelemetry-python).-- To see available OpenTelemetry instrumentations and components, see the [OpenTelemetry Contrib Python GitHub repository](https://github.com/open-telemetry/opentelemetry-python-contrib).
+- To see available OpenTelemetry instrumentations and components, see the [OpenTelemetry Contributor Python GitHub repository](https://github.com/open-telemetry/opentelemetry-python-contrib).
- To enable usage experiences, [enable web or browser user monitoring](javascript.md).
azure-monitor Opentelemetry Nodejs Exporter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-nodejs-exporter.md
+
+ Title: Enable the Azure Monitor OpenTelemetry exporter for Node.js applications
+description: This article provides guidance on how to enable the Azure Monitor OpenTelemetry exporter for Node.js applications.
+ Last updated : 05/10/2023
+ms.devlang: javascript
+++
+# Enable Azure Monitor OpenTelemetry for Node.js applications
+
+This article describes how to enable and configure OpenTelemetry-based data collection to power the experiences within [Azure Monitor Application Insights](app-insights-overview.md#application-insights-overview). To learn more about OpenTelemetry concepts, see the [OpenTelemetry overview](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry).
+
+## OpenTelemetry Release Status
+
+The OpenTelemetry exporter for Node.js is currently available as a public preview.
+
+[Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
+
+## Get started
+
+Follow the steps in this section to instrument your application with OpenTelemetry.
+
+### Prerequisites
+
+- An Azure subscription: [Create an Azure subscription for free](https://azure.microsoft.com/free/)
+- An Application Insights resource: [Create an Application Insights resource](create-workspace-resource.md#create-a-workspace-based-resource)
+- An application using an officially [supported version](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry-exporter#currently-supported-environments) of the Node.js runtime:
+ - [OpenTelemetry supported runtimes](https://github.com/open-telemetry/opentelemetry-js#supported-runtimes)
+ - [Azure Monitor OpenTelemetry Exporter supported runtimes](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry-exporter#currently-supported-environments)
+
+### Install the client libraries
+
+Install these packages:
+
+- [@opentelemetry/sdk-trace-base](https://www.npmjs.com/package/@opentelemetry/sdk-trace-base)
+- [@opentelemetry/sdk-trace-node](https://www.npmjs.com/package/@opentelemetry/sdk-trace-node)
+- [@azure/monitor-opentelemetry-exporter](https://www.npmjs.com/package/@azure/monitor-opentelemetry-exporter)
+- [@opentelemetry/api](https://www.npmjs.com/package/@opentelemetry/api)
+
+```sh
+npm install @opentelemetry/sdk-trace-base
+npm install @opentelemetry/sdk-trace-node
+npm install @azure/monitor-opentelemetry-exporter
+npm install @opentelemetry/api
+```
+
+The following packages are also used for some specific scenarios described later in this article:
+
+- [@opentelemetry/sdk-metrics](https://www.npmjs.com/package/@opentelemetry/sdk-metrics)
+- [@opentelemetry/resources](https://www.npmjs.com/package/@opentelemetry/resources)
+- [@opentelemetry/semantic-conventions](https://www.npmjs.com/package/@opentelemetry/semantic-conventions)
+- [@opentelemetry/instrumentation-http](https://www.npmjs.com/package/@opentelemetry/instrumentation-http)
+
+```sh
+npm install @opentelemetry/sdk-metrics
+npm install @opentelemetry/resources
+npm install @opentelemetry/semantic-conventions
+npm install @opentelemetry/instrumentation-http
+```
+
+### Enable Azure Monitor Application Insights
+
+This section provides guidance that shows how to enable OpenTelemetry.
+
+#### Instrument with OpenTelemetry
+
+The following code demonstrates how to enable OpenTelemetry in a simple JavaScript application:
+
+```javascript
+const { AzureMonitorTraceExporter } = require("@azure/monitor-opentelemetry-exporter");
+const { BatchSpanProcessor } = require("@opentelemetry/sdk-trace-base");
+const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node");
+const { context, trace } = require("@opentelemetry/api")
+
+const provider = new NodeTracerProvider();
+provider.register();
+
+// Create an exporter instance.
+const exporter = new AzureMonitorTraceExporter({
+ connectionString: "<Your Connection String>"
+});
+
+// Add the exporter to the provider.
+provider.addSpanProcessor(
+ new BatchSpanProcessor(exporter)
+);
+
+// Create a tracer.
+const tracer = trace.getTracer("example-basic-tracer-node");
+
+// Create a span. A span must be closed.
+const parentSpan = tracer.startSpan("main");
+
+for (let i = 0; i < 10; i += 1) {
+ doWork(parentSpan);
+}
+// Be sure to end the span.
+parentSpan.end();
+
+function doWork(parent) {
+ // Start another span. In this example, the main method already started a
+ // span, so that will be the parent span, and this will be a child span.
+ const ctx = trace.setSpan(context.active(), parent);
+
+ // Set attributes to the span.
+ // Check the SpanOptions interface for more options that can be set into the span creation
+ const spanOptions = {
+ attributes: {
+ "key": "value"
+ }
+ };
+
+ const span = tracer.startSpan("doWork", spanOptions, ctx);
+
+ // Simulate some random work.
+ for (let i = 0; i <= Math.floor(Math.random() * 40000000); i += 1) {
+ // empty
+ }
+
+ // Annotate our span to capture metadata about our operation.
+ span.addEvent("invoking doWork");
+
+ // Mark the end of span execution.
+ span.end();
+}
+
+```
+
+#### Set the Application Insights connection string
+
+You can set the connection string either programmatically or by setting the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`. If both are set, the programmatic connection string takes precedence.
+
+You can find your connection string on the Overview pane of your Application Insights resource.
++
+To set the connection string programmatically, replace `<Your Connection String>` in the preceding code with the connection string from *your* Application Insights resource.
+
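+The following minimal sketch shows both options. It assumes the `@azure/monitor-opentelemetry-exporter` package shown earlier in this article, and that the exporter falls back to the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable when no connection string is passed:
+
+```javascript
+const { AzureMonitorTraceExporter } = require("@azure/monitor-opentelemetry-exporter");
+
+// Option 1: pass the connection string programmatically.
+const exporter = new AzureMonitorTraceExporter({
+  connectionString: "<Your Connection String>"
+});
+
+// Option 2: omit the option and rely on the APPLICATIONINSIGHTS_CONNECTION_STRING
+// environment variable being set before the process starts.
+const exporterFromEnvironment = new AzureMonitorTraceExporter();
+```
+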
+#### Confirm data is flowing
+
+Run your application and open your **Application Insights Resource** tab in the Azure portal. It might take a few minutes for data to show up in the portal.
++
+> [!IMPORTANT]
+> If you have two or more services that emit telemetry to the same Application Insights resource, you're required to [set Cloud Role Names](#set-the-cloud-role-name-and-the-cloud-role-instance) to represent them properly on the Application Map.
+
+As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. To learn more, see [Statsbeat in Azure Application Insights](./statsbeat.md).
+
+## Set the Cloud Role Name and the Cloud Role Instance
+
+You might want to update the [Cloud Role Name](app-map.md#understand-the-cloud-role-name-within-the-context-of-an-application-map) and the Cloud Role Instance from the default values to something that makes sense to your team. They'll appear on the Application Map as the name underneath a node.
+
+Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md).
+
+```javascript
+...
+const { Resource } = require("@opentelemetry/resources");
+const { SemanticResourceAttributes } = require("@opentelemetry/semantic-conventions");
+const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node");
+const { MeterProvider } = require("@opentelemetry/sdk-metrics")
+
+// -
+// Setting role name and role instance
+// -
+const testResource = new Resource({
+ [SemanticResourceAttributes.SERVICE_NAME]: "my-helloworld-service",
+ [SemanticResourceAttributes.SERVICE_NAMESPACE]: "my-namespace",
+ [SemanticResourceAttributes.SERVICE_INSTANCE_ID]: "my-instance",
+});
+
+// -
+// Done setting role name and role instance
+// -
+const tracerProvider = new NodeTracerProvider({
+ resource: testResource
+});
+
+const meterProvider = new MeterProvider({
+ resource: testResource
+});
+```
+
+## Enable Sampling
+
+You may want to enable sampling to reduce your data ingestion volume, which reduces your cost. Azure Monitor provides a custom *fixed-rate* sampler that populates events with a "sampling ratio", which Application Insights converts to "ItemCount". The *fixed-rate* sampler ensures accurate experiences and event counts. The sampler is designed to preserve your traces across services, and it's interoperable with older Application Insights SDKs. For more information, see [Learn More about sampling](sampling.md#brief-summary).
+
+> [!NOTE]
+> Metrics are unaffected by sampling.
+
+```javascript
+const { BasicTracerProvider, SimpleSpanProcessor } = require("@opentelemetry/sdk-trace-base");
+const { ApplicationInsightsSampler, AzureMonitorTraceExporter } = require("@azure/monitor-opentelemetry-exporter");
+
+// Sampler expects a sample rate between 0 and 1 inclusive.
+// A rate of 0.75 means approximately 75% of your traces are sent.
+const aiSampler = new ApplicationInsightsSampler(0.75);
+
+const provider = new BasicTracerProvider({
+ sampler: aiSampler
+});
+
+const exporter = new AzureMonitorTraceExporter({
+ connectionString: "<Your Connection String>"
+});
+
+provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
+provider.register();
+```
+
+> [!TIP]
+> If you're using fixed-rate/percentage sampling and you aren't sure what to set the sampling rate to, start at 5% (that is, a 0.05 sampling ratio) and adjust the rate based on the accuracy of the operations shown in the failures and performance blades. A higher rate generally results in higher accuracy. However, any sampling affects accuracy, so we recommend alerting on [OpenTelemetry metrics](#metrics), which are unaffected by sampling.
+
+## Instrumentation libraries
+
+The following libraries are validated to work with the current release.
+
+> [!WARNING]
+> Instrumentation libraries are based on experimental OpenTelemetry specifications, which impacts languages in [preview status](#opentelemetry-release-status). Microsoft's *preview* support commitment is to ensure that the following libraries emit data to Azure Monitor Application Insights, but it's possible that breaking changes or experimental mapping will block some data elements.
+
+### Distributed Tracing
+Requests/Dependencies
+- [http/https](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http/README.md) version:
+ [0.33.0](https://www.npmjs.com/package/@opentelemetry/instrumentation-http/v/0.33.0)
+
+Dependencies
+- [mysql](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-mysql) version:
+ [0.25.0](https://www.npmjs.com/package/@opentelemetry/instrumentation-mysql/v/0.25.0)
+
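+The following sketch shows one way to wire these validated instrumentation libraries into a tracer provider that exports to Application Insights. It's a minimal example and assumes the `@opentelemetry/instrumentation` and `@opentelemetry/instrumentation-mysql` packages are installed in addition to the packages listed earlier in this article:
+
+```javascript
+const { AzureMonitorTraceExporter } = require("@azure/monitor-opentelemetry-exporter");
+const { BatchSpanProcessor } = require("@opentelemetry/sdk-trace-base");
+const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node");
+const { registerInstrumentations } = require("@opentelemetry/instrumentation");
+const { HttpInstrumentation } = require("@opentelemetry/instrumentation-http");
+const { MySQLInstrumentation } = require("@opentelemetry/instrumentation-mysql");
+
+// Send spans to Application Insights.
+const provider = new NodeTracerProvider();
+const exporter = new AzureMonitorTraceExporter({
+  connectionString: "<Your Connection String>"
+});
+provider.addSpanProcessor(new BatchSpanProcessor(exporter));
+provider.register();
+
+// Register the http/https and mysql instrumentation libraries.
+registerInstrumentations({
+  tracerProvider: provider,
+  instrumentations: [new HttpInstrumentation(), new MySQLInstrumentation()],
+});
+```
+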
+### Metrics
+
+- [http/https](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http/README.md) version:
+ [0.33.0](https://www.npmjs.com/package/@opentelemetry/instrumentation-http/v/0.33.0)
+
+### Logs
+
+Currently unavailable.
+
+## Collect custom telemetry
+
+This section explains how to collect custom telemetry from your application.
+
+Depending on your language and signal type, there are different ways to collect custom telemetry, including:
+
+- OpenTelemetry API
+- Language-specific logging/metrics libraries
+- Application Insights Classic API
+
+The following table represents the currently supported custom telemetry types:
+
+| Language | Custom Events | Custom Metrics | Dependencies | Exceptions | Page Views | Requests | Traces |
+|---|---|---|---|---|---|---|---|
+| **Node.js** | | | | | | | |
+| &nbsp;&nbsp;&nbsp;OpenTelemetry API | | Yes | Yes | Yes | | Yes | |
+| &nbsp;&nbsp;&nbsp;Winston, Pino, Bunyan | | | | | | | Yes |
+| &nbsp;&nbsp;&nbsp;AI Classic API | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
+
+### Add Custom Metrics
+
+> [!NOTE]
+> Custom Metrics are under preview in Azure Monitor Application Insights. Custom metrics without dimensions are available by default. To view and alert on dimensions, you need to [opt-in](pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation).
+
+You may want to collect metrics beyond what is collected by [instrumentation libraries](#instrumentation-libraries).
+
+The OpenTelemetry API offers six metric "instruments" to cover various metric scenarios, and you need to pick the correct "Aggregation Type" when visualizing metrics in Metrics Explorer. This requirement applies both when you use the OpenTelemetry Metric API to send metrics and when you use an instrumentation library.
+
+The following table shows the recommended [aggregation types](../essentials/metrics-aggregation-explained.md#aggregation-types) for each of the OpenTelemetry Metric Instruments.
+
+| OpenTelemetry Instrument | Azure Monitor Aggregation Type |
+|---|---|
+| Counter | Sum |
+| Asynchronous Counter | Sum |
+| Histogram | Min, Max, Average, Sum and Count |
+| Asynchronous Gauge | Average |
+| UpDownCounter | Sum |
+| Asynchronous UpDownCounter | Sum |
+
+> [!CAUTION]
+> Aggregation types beyond what's shown in the table typically aren't meaningful.
+
+The [OpenTelemetry Specification](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#instrument)
+describes the instruments and provides examples of when you might use each one.
+
+> [!TIP]
+> The histogram is the most versatile and most closely equivalent to the Application Insights Track Metric Classic API. Azure Monitor currently flattens the histogram instrument into our five supported aggregation types, and support for percentiles is underway. Although less versatile, other OpenTelemetry instruments have a lesser impact on your application's performance.
+
+#### Histogram Example
+
+ ```javascript
+ const {
+ MeterProvider,
+ PeriodicExportingMetricReader,
+ } = require("@opentelemetry/sdk-metrics");
+ const {
+ AzureMonitorMetricExporter,
+ } = require("@azure/monitor-opentelemetry-exporter");
+
+ const provider = new MeterProvider();
+ const exporter = new AzureMonitorMetricExporter({
+ connectionString: "<Your Connection String>",
+ });
+
+ const metricReader = new PeriodicExportingMetricReader({
+ exporter: exporter,
+ });
+
+ provider.addMetricReader(metricReader);
+
+ const meter = provider.getMeter("OTel.AzureMonitor.Demo");
+ let histogram = meter.createHistogram("histogram");
+
+ histogram.record(1, { testKey: "testValue" });
+ histogram.record(30, { testKey: "testValue2" });
+ histogram.record(100, { testKey2: "testValue" });
+```
+
+#### Counter Example
+
+```javascript
+ const {
+ MeterProvider,
+ PeriodicExportingMetricReader,
+ } = require("@opentelemetry/sdk-metrics");
+ const { AzureMonitorMetricExporter } = require("@azure/monitor-opentelemetry-exporter");
+
+ const provider = new MeterProvider();
+ const exporter = new AzureMonitorMetricExporter({
+ connectionString: "<Your Connection String>",
+ });
+ const metricReader = new PeriodicExportingMetricReader({
+ exporter: exporter,
+ });
+ provider.addMetricReader(metricReader);
+ const meter = provider.getMeter("OTel.AzureMonitor.Demo");
+ let counter = meter.createCounter("counter");
+ counter.add(1, { "testKey": "testValue" });
+ counter.add(5, { "testKey2": "testValue" });
+ counter.add(3, { "testKey": "testValue2" });
+```
+
+#### Gauge Example
+
+```javascript
+ const {
+ MeterProvider,
+ PeriodicExportingMetricReader
+ } = require("@opentelemetry/sdk-metrics");
+ const { AzureMonitorMetricExporter } = require("@azure/monitor-opentelemetry-exporter");
+
+ const provider = new MeterProvider();
+ const exporter = new AzureMonitorMetricExporter({
+ connectionString: "<Your Connection String>",
+ });
+ const metricReader = new PeriodicExportingMetricReader({
+ exporter: exporter
+ });
+ provider.addMetricReader(metricReader);
+ const meter = provider.getMeter("OTel.AzureMonitor.Demo");
+ let gauge = meter.createObservableGauge("gauge");
+ gauge.addCallback((observableResult) => {
+ let randomNumber = Math.floor(Math.random() * 100);
+ observableResult.observe(randomNumber, {"testKey": "testValue"});
+ });
+```
+
+### Add Custom Exceptions
+
+Select instrumentation libraries automatically report exceptions to Application Insights.
+However, you may want to manually report exceptions beyond what instrumentation libraries report.
+For instance, exceptions caught by your code aren't ordinarily reported. You may wish to report them
+to draw attention in relevant experiences including the failures section and end-to-end transaction views.
+
+```javascript
+const { trace } = require("@opentelemetry/api");
+const { BasicTracerProvider, SimpleSpanProcessor } = require("@opentelemetry/sdk-trace-base");
+const { AzureMonitorTraceExporter } = require("@azure/monitor-opentelemetry-exporter");
+
+const provider = new BasicTracerProvider();
+const exporter = new AzureMonitorTraceExporter({
+ connectionString: "<Your Connection String>",
+});
+provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
+provider.register();
+const tracer = trace.getTracer("example-basic-tracer-node");
+let span = tracer.startSpan("hello");
+try{
+ throw new Error("Test Error");
+}
+catch(error){
+ span.recordException(error);
+}
+```
+
+### Add Custom Spans
+
+You may want to add a custom span when there's a dependency request that's not already collected by an instrumentation library or an application process that you wish to model as a span on the end-to-end transaction view.
+
+```javascript
+const { trace } = require("@opentelemetry/api");
+let tracer = trace.getTracer("testTracer");
+let customSpan = tracer.startSpan("testSpan");
+...
+customSpan.end();
+```
+
+### Send custom telemetry using the Application Insights Classic API
+
+We recommend you use the OpenTelemetry APIs whenever possible, but there may be some scenarios when you have to use the Application Insights Classic APIs.
+
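+For example, the following sketch sends a custom event through the Classic API. It assumes the [`applicationinsights`](https://www.npmjs.com/package/applicationinsights) npm package is installed; the event name and property values are placeholders.
+
+```javascript
+const appInsights = require("applicationinsights");
+
+// Initialize the Classic API SDK with your connection string, then send a custom event.
+appInsights.setup("<Your Connection String>").start();
+appInsights.defaultClient.trackEvent({
+  name: "testEvent",
+  properties: { customProperty: "customValue" }
+});
+```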
+
+## Modify telemetry
+
+This section explains how to modify telemetry.
+
+### Add span attributes
+
+You can use span attributes to add custom properties to your telemetry. You can also use attributes to set optional fields in the Application Insights schema, like Client IP.
+
+#### Add a custom property to a Span
+
+Any [attributes](#add-span-attributes) you add to spans are exported as custom properties. They populate the _customDimensions_ field in the requests, dependencies, traces, or exceptions table.
+
+Use a custom processor:
+
+> [!TIP]
+> Add the processor shown here *before* the Azure Monitor Exporter.
+
+```javascript
+const { AzureMonitorTraceExporter } = require("@azure/monitor-opentelemetry-exporter");
+const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node");
+const { SimpleSpanProcessor } = require("@opentelemetry/sdk-trace-base");
+
+class SpanEnrichingProcessor {
+ forceFlush() {
+ return Promise.resolve();
+ }
+ shutdown() {
+ return Promise.resolve();
+ }
+ onStart(_span){}
+ onEnd(span){
+ span.attributes["CustomDimension1"] = "value1";
+ span.attributes["CustomDimension2"] = "value2";
+ }
+}
+
+const provider = new NodeTracerProvider();
+const azureExporter = new AzureMonitorTraceExporter({
+ connectionString: "<Your Connection String>"
+});
+
+provider.addSpanProcessor(new SpanEnrichingProcessor());
+provider.addSpanProcessor(new SimpleSpanProcessor(azureExporter));
+```
+
+#### Set the user IP
+
+You can populate the _client_IP_ field for requests by setting the `http.client_ip` attribute on the span. Application Insights uses the IP address to generate user location attributes and then [discards it by default](ip-collection.md#default-behavior).
+
+Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code:
+
+```javascript
+...
+const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
+
+class SpanEnrichingProcessor {
+ ...
+
+ onEnd(span){
+ span.attributes[SemanticAttributes.HTTP_CLIENT_IP] = "<IP Address>";
+ }
+}
+```
+
+#### Set the user ID or authenticated user ID
+
+You can populate the _user_Id_ or _user_AuthenticatedId_ field for requests by using the guidance below. User ID is an anonymous user identifier. Authenticated User ID is a known user identifier.
+
+> [!IMPORTANT]
+> Consult applicable privacy laws before you set the Authenticated User ID.
+
+Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code:
+
+```typescript
+...
+import { SemanticAttributes } from "@opentelemetry/semantic-conventions";
+
+class SpanEnrichingProcessor implements SpanProcessor{
+ ...
+
+ onEnd(span: ReadableSpan){
+ span.attributes[SemanticAttributes.ENDUSER_ID] = "<User ID>";
+ }
+}
+```
+
+### Add Log Attributes
+
+Currently unavailable.
+
+### Filter telemetry
+
+You might use the following ways to filter out telemetry before it leaves your application.
+
+1. Use the URL exclusion option provided by many HTTP instrumentation libraries.
+
+ The following example shows how to exclude a certain URL from being tracked by using the [HTTP/HTTPS instrumentation library](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http):
+
+ ```javascript
+ const { registerInstrumentations } = require( "@opentelemetry/instrumentation");
+ const { HttpInstrumentation } = require( "@opentelemetry/instrumentation-http");
+ const { NodeTracerProvider } = require( "@opentelemetry/sdk-trace-node");
+
+ const httpInstrumentationConfig = {
+ ignoreIncomingRequestHook: (request) => {
+ // Ignore OPTIONS incoming requests
+ if (request.method === 'OPTIONS') {
+ return true;
+ }
+ return false;
+ },
+ ignoreOutgoingRequestHook: (options) => {
+ // Ignore outgoing requests with /test path
+ if (options.path === '/test') {
+ return true;
+ }
+ return false;
+ }
+ };
+
+ const httpInstrumentation = new HttpInstrumentation(httpInstrumentationConfig);
+ const provider = new NodeTracerProvider();
+ provider.register();
+
+ registerInstrumentations({
+ instrumentations: [
+ httpInstrumentation,
+ ]
+ });
+ ```
+
+2. Use a custom processor. You can use a custom span processor to exclude certain spans from being exported. To mark spans so they aren't exported, set `traceFlags` to `TraceFlags.NONE`.
+Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code:
+
+ ```javascript
+ const { SpanKind, TraceFlags } = require("@opentelemetry/api");
+
+ class SpanEnrichingProcessor {
+ ...
+
+ onEnd(span) {
+ if(span.kind == SpanKind.INTERNAL){
+ span.spanContext().traceFlags = TraceFlags.NONE;
+ }
+ }
+ }
+ ```
+
+### Get the trace ID or span ID
+
+You might want to get the trace ID or span ID. If you send logs to a destination other than Application Insights, consider adding the trace ID or span ID to enable better correlation when you debug and diagnose issues.
+
+ ```javascript
+ const { trace } = require("@opentelemetry/api");
+
+ let spanId = trace.getActiveSpan().spanContext().spanId;
+ let traceId = trace.getActiveSpan().spanContext().traceId;
+ ```
+
+## Enable the OTLP Exporter
+
+You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside your Azure Monitor Exporter to send your telemetry to two locations.
+
+> [!NOTE]
+> The OTLP Exporter is shown for convenience only. We don't officially support the OTLP Exporter or any components or third-party experiences downstream of it.
+
+1. Install the [OpenTelemetry Collector Exporter](https://www.npmjs.com/package/@opentelemetry/exporter-otlp-http) package along with the [Azure Monitor OpenTelemetry Exporter](https://www.npmjs.com/package/@azure/monitor-opentelemetry-exporter) in your project.
+
+ ```sh
+ npm install @opentelemetry/exporter-otlp-http
+ npm install @azure/monitor-opentelemetry-exporter
+ ```
+
+2. Add the following code snippet. This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. For details, see the [example on GitHub](https://github.com/open-telemetry/opentelemetry-js/tree/main/examples/otlp-exporter-node).
+
+ ```javascript
+ const { BasicTracerProvider, SimpleSpanProcessor } = require('@opentelemetry/sdk-trace-base');
+ const { OTLPTraceExporter } = require('@opentelemetry/exporter-otlp-http');
+ const { AzureMonitorTraceExporter } = require("@azure/monitor-opentelemetry-exporter");
+
+ const provider = new BasicTracerProvider();
+ const azureMonitorExporter = new AzureMonitorTraceExporter({
+ connectionString: "<Your Connection String>",
+ });
+ const otlpExporter = new OTLPTraceExporter();
+ provider.addSpanProcessor(new SimpleSpanProcessor(azureMonitorExporter));
+ provider.addSpanProcessor(new SimpleSpanProcessor(otlpExporter));
+ provider.register();
+ ```
+
+## Support
+
+- For OpenTelemetry issues, contact the [OpenTelemetry JavaScript community](https://github.com/open-telemetry/opentelemetry-js) directly.
+- For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-js/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22).
+
+## OpenTelemetry feedback
+
+To provide feedback:
+
+- Fill out the OpenTelemetry community's [customer feedback survey](https://docs.google.com/forms/d/e/1FAIpQLScUt4reClurLi60xyHwGozgM9ZAz8pNAfBHhbTZ4gFWaaXIRQ/viewform).
+- Tell Microsoft about yourself by joining the [OpenTelemetry Early Adopter Community](https://aka.ms/AzMonOTel/).
+- Engage with other Azure Monitor users in the [Microsoft Tech Community](https://techcommunity.microsoft.com/t5/azure-monitor/bd-p/AzureMonitor).
+- Make a feature request at the [Azure Feedback Forum](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0).
+
+## Next steps
+
+- To review the source code, see the [Azure Monitor Exporter GitHub repository](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry-exporter).
+- To install the npm package, check for updates, or view release notes, see the [Azure Monitor Exporter npm Package](https://www.npmjs.com/package/@azure/monitor-opentelemetry-exporter) page.
+- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry-exporter/samples).
+- To learn more about OpenTelemetry and its community, see the [OpenTelemetry JavaScript GitHub repository](https://github.com/open-telemetry/opentelemetry-js).
+- To enable usage experiences, [enable web or browser user monitoring](javascript.md).
azure-monitor Opentelemetry Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-overview.md
Title: OpenTelemetry with Azure Monitor overview description: This article provides an overview of how to use OpenTelemetry with Azure Monitor. Previously updated : 01/10/2023 Last updated : 05/10/2023
Telemetry, the data collected to observe your application, can be broken into th
- Metrics - Logs
-Initially, the OpenTelemetry community took on Distributed Tracing. Metrics and Logs are still in progress. A complete observability story includes all three pillars, but currently our [Azure Monitor OpenTelemetry-based exporter preview offerings for .NET, Python, and JavaScript](opentelemetry-enable.md) only include Distributed Tracing.
+A complete observability story includes all three pillars. Our [Azure Monitor OpenTelemetry Distros for ASP.NET Core, Java, JavaScript (Node.js), and Python](opentelemetry-enable.md) include everything you need to power Application Performance Monitoring on Azure. The Distro itself is free to install, and you only pay for the data you ingest in Azure Monitor.
The following sources explain the three pillars:
There are two methods to instrument your application:
- Manual instrumentation - Automatic instrumentation (auto-instrumentation)
-Manual instrumentation is coding against the OpenTelemetry API. In the context of a user, it typically refers to installing a language-specific SDK in an application. Manual instrumentation packages consist of [Azure Monitor OpenTelemetry-based exporter preview offerings for .NET, Python, and JavaScript](opentelemetry-enable.md).
+Manual instrumentation is coding against the OpenTelemetry API. In the context of a user, it typically refers to installing a language-specific SDK in an application. Manual instrumentation packages consist of [Azure Monitor OpenTelemetry Distros for .NET, Python, and JavaScript (Node.js)](opentelemetry-enable.md).
> [!IMPORTANT] > "Manual" doesn't mean you'll be required to write complex code to define spans for distributed traces, although it remains an option. A rich and growing set of instrumentation libraries maintained by OpenTelemetry contributors will enable you to effortlessly capture telemetry signals across common frameworks and libraries. >
-> A subset of OpenTelemetry instrumentation libraries will be supported by Azure Monitor, informed by customer feedback. We're also working to [instrument the most popular Azure Service SDKs using OpenTelemetry](https://devblogs.microsoft.com/azure-sdk/introducing-experimental-opentelemetry-support-in-the-azure-sdk-for-net/).
+> A subset of OpenTelemetry instrumentation libraries are included in the Azure Monitor OpenTelemetry Distros, informed by customer feedback. We're also working to [instrument the most popular Azure Service SDKs using OpenTelemetry](https://devblogs.microsoft.com/azure-sdk/introducing-experimental-opentelemetry-support-in-the-azure-sdk-for-net/).
-Auto-instrumentation enables telemetry collection through configuration without touching the application's code. Although it's more convenient, it tends to be less configurable. It's also not available in all languages. The Azure Monitor OpenTelemetry-based auto-instrumentation offering consists of the [Java 3.X OpenTelemetry-based GA offering](opentelemetry-enable.md?tabs=java). We continue to invest in it informed by customer feedback. The OpenTelemetry community is also experimenting with C# and Python auto-instrumentation, but Azure Monitor is focused on creating a simple and effective manual instrumentation story in the near term.
+Auto-instrumentation enables telemetry collection through configuration without touching the application's code. Although it's more convenient, it tends to be less configurable. It's also not available in all languages. The [Azure Monitor OpenTelemetry Java Distro](opentelemetry-enable.md?tabs=java) uses the auto-instrumentation method.
### Send your telemetry
There are two ways to send your data to Azure Monitor (or any vendor):
A direct exporter sends telemetry in-process (from the application's code) directly to the Azure Monitor ingestion endpoint. The main advantage of this approach is onboarding simplicity.
-*All currently supported OpenTelemetry-based offerings in Azure Monitor use a direct exporter*.
+*The currently available Azure Monitor OpenTelemetry Distros rely on a direct exporter*.
Alternatively, sending telemetry via an agent will provide a path for any OpenTelemetry-supported language to send to Azure Monitor via [Open Telemetry Protocol (OTLP)](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/README.md). Receiving OTLP will enable customers to observe applications written in languages beyond our [supported languages](platforms.md). > [!NOTE]
-> Some customers have begun to use the [OpenTelemetry-Collector](https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/design.md) as an agent alternative even though Microsoft doesn't officially support the "via an agent" approach for application monitoring yet. In the meantime, the open-source community has contributed an OpenTelemetry-Collector Azure Monitor exporter that some customers are using to send data to Azure Monitor Application Insights.
+> For Azure Monitor's position on the [OpenTelemetry-Collector](https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/design.md), see the [OpenTelemetry FAQ](../faq.yml#can-i-use-the-opentelemetry-collector-).
## Terms
Traces | Logs
## Next steps
-The following websites consist of language-by-language guidance to enable and configure Microsoft's OpenTelemetry-based offerings. The available functionality and limitations of each offering are explained so that you can determine whether OpenTelemetry is right for your project.
+1. The following websites consist of language-by-language guidance to enable and configure Microsoft's OpenTelemetry-based offerings.
- [.NET](opentelemetry-enable.md?tabs=net) - [Java](opentelemetry-enable.md?tabs=java) - [JavaScript](opentelemetry-enable.md?tabs=nodejs) - [Python](opentelemetry-enable.md?tabs=python)+
+2. Check out the [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry).
azure-monitor Container Insights Prometheus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus.md
# Collect Prometheus metrics with Container insights [Prometheus](https://aka.ms/azureprometheus-promio) is a popular open-source metric monitoring solution and is the most common monitoring tool used to monitor Kubernetes clusters. Container insights uses its containerized agent to collect much of the same data that Prometheus typically collects from the cluster without requiring a Prometheus server. This data is presented in Container insights views and available to other Azure Monitor features such as [log queries](container-insights-log-query.md) and [log alerts](container-insights-log-alerts.md).
-Container insights can also scrape Prometheus metrics from your cluster and send the data to either Azure Monitor Logs or to Azure Monitor managed service for Prometheus. This requires exposing the Prometheus metrics endpoint through your exporters or pods and then configuring one of the addons for the Azure Monitor agent used by Container insights as shown the following diagram.
+Container insights can also scrape custom Prometheus metrics from applications running on your cluster and send the data to either Azure Monitor Logs or Azure Monitor managed service for Prometheus (preview). This requires exposing the Prometheus metrics endpoint through your exporters or pods and then configuring one of the addons for the Azure Monitor agent used by Container insights, as shown in the following diagram. Metrics sent to the Log Analytics workspace are queried with log queries, whereas metrics sent to Azure Monitor managed service for Prometheus are queried with PromQL and support Prometheus recording rules and alerts.
:::image type="content" source="media/container-insights-prometheus/monitoring-kubernetes-architecture.png" lightbox="media/container-insights-prometheus/monitoring-kubernetes-architecture.png" alt-text="Diagram of container monitoring architecture sending Prometheus metrics to Azure Monitor Logs." border="false":::
Use the following procedure to add Prometheus collection to your cluster that's
6. Click **Configure** to complete the configuration.
-See [Collect Prometheus metrics from AKS cluster (preview)](../essentials/prometheus-metrics-enable.md) for details on [verifying your deployment](../essentials/prometheus-metrics-enable.md#verify-deployment) and [limitations](../essentials/prometheus-metrics-enable.md#limitations)
+See [Collect Prometheus metrics from AKS cluster (preview)](../essentials/prometheus-metrics-enable.md) for details on [verifying your deployment](../essentials/prometheus-metrics-enable.md#verify-deployment) and [limitations](../essentials/prometheus-metrics-enable.md#limitations-during-enablementdeployment).
## Send metrics to Azure Monitor Logs You may want to collect more data in addition to the predefined set of data collected by Container insights. This data isn't used by Container insights views but is available for log queries and alerts like the other data it collects. This requires configuring the *monitoring addon* for the Azure Monitor agent, which is the one currently used by Container insights to send data to a Log Analytics workspace.
-### Prometheus scraping settings
+### Prometheus scraping settings (for metrics stored as logs)
-Active scraping of metrics from Prometheus is performed from one of two perspectives:
+Active scraping of metrics from Prometheus is performed from one of the two perspectives below, and the metrics are sent to the configured Log Analytics workspace:
- **Cluster-wide**: Defined in the ConfigMap section *[prometheus_data_collection_settings.cluster]*. - **Node-wide**: Defined in the ConfigMap section *[prometheus_data_collection_settings.node]*.
When a URL is specified, Container insights only scrapes the endpoint. When Kube
| Node-wide or cluster-wide | `interval` | String | 60s | The collection interval default is one minute (60 seconds). You can modify the collection for either the *[prometheus_data_collection_settings.node]* and/or *[prometheus_data_collection_settings.cluster]* to time units such as s, m, and h. | | Node-wide or cluster-wide | `fieldpass`<br> `fielddrop`| String | Comma-separated array | You can specify certain metrics to be collected or not from the endpoint by setting the allow (`fieldpass`) and disallow (`fielddrop`) listing. You must set the allowlist first. |
-### Configure ConfigMaps
+### Configure ConfigMaps to specify Prometheus scrape configuration (for metrics stored as logs)
Perform the following steps to configure your ConfigMap configuration file for your cluster. The ConfigMap is a global list, and only one ConfigMap can be applied to the agent. You can't have another ConfigMap overruling the collections.
Perform the following steps to configure your ConfigMap configuration file for y
To configure scraping of Prometheus metrics by specifying a pod annotation:
- 1. In the ConfigMap, specify the following configuration:
+ 1. In the ConfigMap, specify the following configuration:
``` prometheus-data-collection-settings: |-
Perform the following steps to configure your ConfigMap configuration file for y
monitor_kubernetes_pods = true ```
- 1. Specify the following configuration for pod annotations:
+ 2. Specify the following configuration for pod annotations:
``` - prometheus.io/scrape:"true" #Enable scraping for this pod
azure-monitor Azure Monitor Workspace Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-manage.md
Last updated 03/28/2023
This article shows you how to create and delete an Azure Monitor workspace. When you configure Azure Monitor managed service for Prometheus, you can select an existing Azure Monitor workspace or create a new one. > [!NOTE]
-> When you create an Azure Monitor workspace, by default a data collection rule and a data collection endpoint in the form `<azure-workspace-name>` will automatically be created in a resource group in the form `MA_<azure-workspace-name>_<location>_managed`.
+> When you create an Azure Monitor workspace, by default a data collection rule and a data collection endpoint in the form `<azure-monitor-workspace-name>` will automatically be created in a resource group in the form `MA_<azure-monitor-workspace-name>_<location>_managed`.
## Create an Azure Monitor workspace ### [Azure portal](#tab/azure-portal)
resource workspace 'microsoft.monitor/accounts@2021-06-03-preview' = {
```
-When you create an Azure Monitor workspace, a new resource group is created. The resource group name has the following format: `MA_<azure monitor workspace resource name>_<location code>_managed`, where the tokenized elements are in lower case. The resource group contains a data collection endpoint, and a data collection rule with the same name as the workspace. The resource group and its resources are automatically deleted when you delete the workspace.
+When you create an Azure Monitor workspace, a new resource group is created. The resource group name has the following format: `MA_<azure-monitor-workspace-name>_<location>_managed`, where the tokenized elements are lowercased. The resource group contains both a data collection endpoint and a data collection rule with the same name as the workspace. The resource group and its resources are automatically deleted when you delete the workspace.
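If you want to confirm what was created, one quick way is to list the contents of that managed resource group with the Azure CLI. This is a minimal sketch, assuming a workspace named `my-amw` in `eastus`; substitute your own workspace name and location:

```azurecli
# List the data collection endpoint and data collection rule created with the workspace
az resource list --resource-group "MA_my-amw_eastus_managed" --output table
```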
To connect your Azure Monitor managed service for Prometheus to your Azure Monitor workspace, see [Collect Prometheus metrics from AKS cluster](./prometheus-metrics-enable.md) ## Delete an Azure Monitor workspace
-When you delete an Azure Monitor workspace, no soft-delete operation is performed like with a [Log Analytics workspace](../logs/delete-workspace.md). The data in the workspace is immediately deleted, and there's no recovery option.
+When you delete an Azure Monitor workspace, unlike with a [Log Analytics workspace](../logs/delete-workspace.md), there is no soft delete operation. The data in the workspace is immediately deleted, and there's no recovery option.
### [Azure portal](#tab/azure-portal)
When you delete an Azure Monitor workspace, no soft-delete operation is performe
:::image type="content" source="media/azure-monitor-workspace-overview/delete-azure-monitor-workspace.png" lightbox="media/azure-monitor-workspace-overview/delete-azure-monitor-workspace.png" alt-text="Screenshot of Azure Monitor workspaces delete button."::: ### [CLI](#tab/cli)
-To delete an AzureMonitor workspace use [az resource delete](https://learn.microsoft.com/cli/azure/resource#az-resource-delete)
+To delete an Azure Monitor workspace, use `az resource delete`.
For example: ```azurecli
For information on deleting resources and Azure Resource Manager, see [Azure Res
Connect an Azure Monitor workspace to an [Azure Managed Grafana](../../managed-grafan) workspace to allow Grafana to use the Azure Monitor workspace data in a Grafana dashboard. An Azure Monitor workspace can be connected to multiple Grafana workspaces, and a Grafana workspace can be connected to multiple Azure Monitor workspaces. > [!NOTE]
-> When you add the Azure Monitor workspace as a data source to Grafana, it will be listed in form `Managed_Prometheus_<azure-workspace-name>`.
+> When you add the Azure Monitor workspace as a data source to Grafana, it will be listed in form `Managed_Prometheus_<azure-monitor-workspace-name>`.
### [Azure portal](#tab/azure-portal) 1. Open the **Azure Monitor workspace** menu in the Azure portal.
Connect an Azure Monitor workspace to an [Azure Managed Grafana](../../managed-g
Create a link between the Azure Monitor workspace and the Grafana workspace by updating the Azure Kubernetes Service cluster that you're monitoring.
+If your cluster is already configured to send data to Azure Monitor managed service for Prometheus, you must disable it first using the following command:
+ ```azurecli
-az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id
-<azure-monitor-workspace-name-resource-id> --grafana-resource-id <grafana-workspace-name-resource-id>
+az aks update --disable-azuremonitormetrics -g <cluster-resource-group> -n <cluster-name>
```
-If your cluster is already configured to send data to an Azure Monitor managed service for Prometheus, disable it first using the following command:
+Then, either enable or re-enable using the following command:
```azurecli
-az aks update --disable-azuremonitormetrics -g <cluster-resource-group> -n <cluster-name>
+az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id
+<azure-monitor-workspace-name-resource-id> --grafana-resource-id <grafana-workspace-name-resource-id>
``` Output
To set up an Azure monitor workspace as a data source for Grafana using a Resour
-If your Grafana Instance is self managed, see [Use Azure Monitor managed service for Prometheus (preview) as data source for self-managed Grafana using managed system identity](./prometheus-self-managed-grafana-azure-active-directory.md)
+If your Grafana instance is self-managed, see [Use Azure Monitor managed service for Prometheus (preview) as data source for self-managed Grafana using managed system identity](./prometheus-self-managed-grafana-azure-active-directory.md).
If your Grafana Instance is self managed, see [Use Azure Monitor managed service
## Next steps-- [Links a Grafana instance to your Azure monitor Workspace](./prometheus-metrics-enable.md#enable-prometheus-metric-collection)
+- [Link a Grafana instance to your Azure Monitor workspace](./prometheus-metrics-enable.md#enable-prometheus-metric-collection)
- Learn more about the [Azure Monitor data platform](../data-platform.md).-- [Azure Monitor Workspace Overview](./azure-monitor-workspace-overview.md)
+- [Azure Monitor workspace overview](./azure-monitor-workspace-overview.md)
azure-monitor Azure Monitor Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-overview.md
Last updated 01/22/2023
An Azure Monitor workspace is a unique environment for data collected by Azure Monitor. Each workspace has its own data repository, configuration, and permissions. > [!Note]
-> Log Analytics workspaces contain logs and metrics data from multiple Azure resources, whereas Azure Monitor workspaces contain only metrics related to Prometheus.
+> Log Analytics workspaces contain logs and metrics data from multiple Azure resources, whereas Azure Monitor workspaces currently contain only metrics related to Prometheus.
## Contents of Azure Monitor workspace Azure Monitor workspaces will eventually contain all metric data collected by Azure Monitor. Currently, Prometheus metrics are the only data hosted in an Azure Monitor workspace.
The following table presents criteria to consider when designing an Azure Monito
|Criteria|Description| |||
-|Segregate by logical boundaries |Create separate Azure Monitor workspaces for operational data based on logical boundaries, for example, a role, application type, type of metric etc.|
+|Segregate by logical boundaries |Create separate Azure Monitor workspaces for operational data based on logical boundaries, such as role, application type, or type of metric.|
|Azure tenants | For multiple Azure tenants, create an Azure Monitor workspace in each tenant. Data sources can only send monitoring data to an Azure Monitor workspace in the same Azure tenant. | |Azure regions |Each Azure Monitor workspace resides in a particular Azure region. Regulatory or compliance requirements may dictate the storage of data in particular locations. |
-|Data ownership |Create separate Azure Monitor workspaces to define data ownership, for example by subsidiaries or affiliated companies.|
+|Data ownership |Create separate Azure Monitor workspaces to define data ownership, such as by subsidiaries or affiliated companies.|
### Considerations when creating an Azure Monitor workspace
-* Azure Monitor workspaces are regional. When you create a new Azure Monitor workspace, you provide a region, setting the location in which the data is stored.
+* Azure Monitor workspaces are regional. When you create a new Azure Monitor workspace, you provide a region, which sets the location in which the data is stored.
* Start with a single workspace to reduce the complexity of managing and querying data from multiple Azure resources. * The default Azure Monitor workspace limit is 1 million active time series and 1 million events per minute ingested.
-* There's no reduction in performance due to the amount of data in your Azure Monitor workspace. Multiple services can send data to the same account simultaneously. There is, however, a limit on how large an Azure Monitor workspace can scale as explained below.
+* There's no reduction in performance due to the amount of data in your Azure Monitor workspace. Multiple services can send data to the same account simultaneously. There is, however, a limit on how much an Azure Monitor workspace can scale, as explained below.
### Growing account capacity
-Azure Monitor workspaces have default quotas and limitations for metrics. As your product grows and you need more metrics, you can request an increase to 50 million events or active time series. If your capacity needs grow exceptionally large, and your data ingestion needs can no longer be met by a single Azure Monitor workspace, consider creating multiple Azure Monitor workspaces.
+Azure Monitor workspaces have default quotas and limitations for metrics. As your product grows and you need more metrics, you can request an increase to 50 million events or active time series. If your capacity needs are exceptionally large and your data ingestion needs can no longer be met by a single Azure Monitor workspace, consider creating multiple Azure Monitor workspaces.
### Multiple Azure Monitor workspaces
-When an Azure Monitor workspace reaches 80% of its maximum capacity, or depending on your current and forecasted metrics volume, it's recommended to split the Azure Monitor workspace into multiple workspaces. Split the workspace based on how the data in the workspace is used by your applications and business processes, and how you want to access that data in the future.
+When an Azure Monitor workspace reaches 80% of its maximum capacity or is forecasted to reach that volume of metrics, it's recommended to split the Azure Monitor workspace into multiple workspaces. You should split the workspace based on how the data in the workspace is used by your applications and business processes and by how you want to access that data in the future.
-In certain circumstances, splitting Azure Monitor workspace into multiple workspaces can be necessary. For example:
-* Monitoring data in sovereign clouds ΓÇô Create Azure Monitor workspace(s) in each sovereign cloud.
-* Compliance or regulatory requirements that mandate storage of data in specific regions ΓÇô Create an Azure Monitor workspace per region as per requirements. There may be a need to manage the scale of metrics for large services or financial institutions with regional accounts.
-* Separating metric data in test, pre-production, and production environments
+In certain circumstances, splitting an Azure Monitor workspace into multiple workspaces can be necessary. For example:
+* Monitoring data in sovereign clouds: Create an Azure Monitor workspace in each sovereign cloud.
+* Compliance or regulatory requirements that mandate storage of data in specific regions: Create an Azure Monitor workspace per region as per requirements. There may be a need to manage the scale of metrics for large services or financial institutions with regional accounts.
+* Separating metrics in test, pre-production, and production environments: Create an Azure Monitor workspace per environment.
>[!Note]
-> A single query cannot access multiple Azure Monitor workspaces. Keep data that you want to retrieve in a single query in same workspace. For presentation purposes, setting up Grafana with each workspace as a dedicated data source will allow for querying multiple workspaces in a single Grafana panel.
+> A single query cannot access multiple Azure Monitor workspaces. Keep data that you want to retrieve in a single query in the same workspace. For visualization purposes, setting up Grafana with each workspace as a dedicated data source will allow for querying multiple workspaces in a single Grafana panel.
## Limitations See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for performance related service limits for Azure Monitor managed service for Prometheus.-- Private Links aren't supported for Prometheus metrics collection into Azure monitor workspace.-- Azure monitor workspaces are currently only supported in public clouds.-- Azure monitor workspaces don't currently support being moved into a different subscription or resource group once created.
+- Azure Monitor Private Link isn't supported for Prometheus metrics collection into an Azure Monitor workspace.
+- Azure Monitor workspaces are currently only supported in public clouds.
+- Azure Monitor workspaces don't currently support being moved into a different subscription or resource group once created.
## Next steps
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
Previously updated : 04/13/2023 Last updated : 05/07/2023
> [!NOTE] > This list is largely auto-generated. Any modification made to this list via GitHub might be written over without warning. Contact the author of this article for details on how to make permanent updates.
-Date list was last updated: 04/13/2023.
+Date list was last updated: 05/07/2023.
Azure Monitor provides several ways to interact with metrics, including charting them in the Azure portal, accessing them through the REST API, or querying them by using PowerShell or the Azure CLI (Command Line Interface).
This latest update adds a new column and reorders the metrics to be alphabetical
- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)
-<!--Gen Date: Thu Apr 13 2023 22:24:40 GMT+0300 (Israel Daylight Time)-->
+<!--Gen Date: Sun May 07 2023 12:43:57 GMT+0300 (Israel Daylight Time)-->
azure-monitor Prometheus Api Promql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-api-promql.md
# Query Prometheus metrics using the API and PromQL
-Azure Monitor managed service for Prometheus (preview), collects metrics from Azure Kubernetes Clusters and stores them in an Azure Monitor workspace. PromQL - Prometheus query language, is a functional query language that allows you to query and aggregate time series data. Use PromQL to query and aggregate metrics stored in an Azure Monitor workspace.
+Azure Monitor managed service for Prometheus (preview) collects metrics from Azure Kubernetes clusters and stores them in an Azure Monitor workspace. PromQL (Prometheus query language) is a functional query language that allows you to query and aggregate time series data. Use PromQL to query and aggregate metrics stored in an Azure Monitor workspace.
This article describes how to query an Azure Monitor workspace using PromQL via the REST API. For more information on PromQL, see [Querying prometheus](https://prometheus.io/docs/prometheus/latest/querying/basics/). ## Prerequisites To query an Azure monitor workspace using PromQL, you need the following prerequisites:
-+ An Azure Kubernetes Cluster or remote Kubernetes cluster.
-+ Azure Monitor managed service for Prometheus (preview) scraping metrics from a Kubernetes cluster
-+ An Azure Monitor Workspace where Prometheus metrics are being stored.
++ An Azure Kubernetes cluster or remote Kubernetes cluster.++ Azure Monitor managed service for Prometheus (preview) scraping metrics from a Kubernetes cluster.++ An Azure Monitor workspace where Prometheus metrics are being stored. ## Authentication To query your Azure Monitor workspace, authenticate using Azure Active Directory.
-The API supports Azure Active Directory authentication using Client credentials. Register a client app with Azure Active Directory and request a token.
+The API supports Azure Active Directory authentication using client credentials. Register a client app with Azure Active Directory and request a token.
To set up Azure Active Directory authentication, follow the steps below:
To set up Azure Active Directory authentication, follow the steps below:
1. To register an app, follow the steps in [Register an App to request authorization tokens and work with APIs](../logs/api/register-app-for-token.md?tabs=portal) ### Allow your app access to your workspace
-Assign the *Monitoring Data Reader* role your app so it can query data from your Azure Monitor workspace.
+Assign the **Monitoring Data Reader** role to your app so it can query data from your Azure Monitor workspace.
1. Open your Azure Monitor workspace in the Azure portal.
Assign the *Monitoring Data Reader* role your app so it can query data from your
:::image type="content" source="./media/prometheus-api-promql/select-members.png" lightbox="./media/prometheus-api-promql/select-members.png" alt-text="A screenshot showing the Add role assignment, select members page.":::
-You've created your App registration and have assigned it access to query data from your Azure Monitor workspace. You can now generate a token and use it in a query.
+You've created your app registration and have assigned it access to query data from your Azure Monitor workspace. You can now generate a token and use it in a query.
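If you'd rather script this step than use the portal, the same role assignment can be created with the Azure CLI. This is a sketch only; the client ID and workspace resource ID are placeholders you substitute:

```azurecli
# Grant the registered app permission to query the Azure Monitor workspace
az role assignment create \
  --assignee "<app-client-id>" \
  --role "Monitoring Data Reader" \
  --scope "<azure-monitor-workspace-resource-id>"
```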
### Request a token
Save the access token from the response for use in the following HTTP requests.
## Query endpoint
-Find your workspace's query endpoint on the Azure Monitor workspace overview page.
+Find your Azure Monitor workspace's query endpoint on the Azure Monitor workspace overview page.
:::image type="content" source="./media/prometheus-api-promql/find-query-endpoint.png" lightbox="./media/prometheus-api-promql/find-query-endpoint.png" alt-text="A screenshot showing the query endpoint on the Azure Monitor workspace overview page.":::
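As an illustrative sketch (not part of the original walkthrough), an instant query can then be issued against that endpoint with the token from the previous step. The endpoint value and the `up` series are placeholders; any metric-scoped PromQL expression works the same way:

```bash
QUERY_ENDPOINT="<query-endpoint-from-the-overview-page>"   # include the https:// prefix
ACCESS_TOKEN="<token-from-the-previous-step>"

# Prometheus instant query: /api/v1/query with a metric-scoped expression
curl -sG "$QUERY_ENDPOINT/api/v1/query" \
  --data-urlencode 'query=up' \
  -H "Authorization: Bearer $ACCESS_TOKEN"
```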
For the full specification of OSS prom APIs, see [Prometheus HTTP API](https://p
## API limitations The following limitations are in addition to those detailed in the Prometheus specification.
-+ Query must be scoped to metric
++ Query must be scoped to a metric Any time series fetch queries (/series or /query or /query_range) must contain a \_\_name\_\_ label matcher. That is, each query must be scoped to a metric. There can only be one \_\_name\_\_ label matcher in a query. + Supported time range
- + /query_range API supports a time range of 32 days. This is the maximum time range allowed including range selectors specified in the query itself.
- For example, the query `rate(http_requests_total[1h]` for last 24 hours would actually mean data is being queried for 25 hours. A 24 hours range + 1 hour specified in query itself.
+ + /query_range API supports a time range of 32 days. This is the maximum time range allowed, including range selectors specified in the query itself.
+ For example, the query `rate(http_requests_total[1h])` for the last 24 hours would actually mean data is being queried for 25 hours: the 24-hour range plus the 1 hour specified in the query itself. (See the sketch after this list for a range query that stays within these limits.)
+ /series API fetches data for a maximum 12-hour time range. If `endTime` isn't provided, endTime = time.now(). If the time range is greater than 12 hours, the `startTime` is set to `endTime - 12h`. + Ignored time range
- Start time and end time provided with `/labels` and `/label/__name__/values` are ignored, and all retained data in the Azure Monitor Workspace is queried.
+ Start time and end time provided with `/labels` and `/label/__name__/values` are ignored, and all retained data in the Azure Monitor workspace is queried.
+ Experimental features Experimental features such as exemplars aren't supported.
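To make the range-query limits above concrete, here's a sketch of a `/query_range` call that stays well inside the 32-day window by querying the last 24 hours at a 5-minute step (the endpoint and token placeholders are the same as in the earlier query example):

```bash
END=$(date -u +%s)        # now, as a Unix timestamp
START=$((END - 86400))    # 24 hours ago

# Range query scoped to a single metric, as required by the __name__ matcher rule above
curl -sG "$QUERY_ENDPOINT/api/v1/query_range" \
  --data-urlencode 'query=rate(http_requests_total[1h])' \
  --data-urlencode "start=$START" \
  --data-urlencode "end=$END" \
  --data-urlencode "step=300" \
  -H "Authorization: Bearer $ACCESS_TOKEN"
```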
azure-monitor Prometheus Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-grafana.md
Last updated 09/28/2022
[Azure Monitor managed service for Prometheus (preview)](prometheus-metrics-overview.md) allows you to collect and analyze metrics at scale using a [Prometheus](https://aka.ms/azureprometheus-promio)-compatible monitoring solution. The most common way to analyze and present Prometheus data is with a Grafana dashboard. This article explains how to configure Prometheus as a data source for both [Azure Managed Grafana](../../managed-grafan) and [self-hosted Grafana](https://grafana.com/) running in an Azure virtual machine using managed system identity authentication.
-For information on using Grafana with Active Directory, see [Configure self-managed Grafana to use Azure-managed Prometheus with Azure Active Directory](./prometheus-self-managed-grafana-azure-active-directory.md).
+For information on using Grafana with Active Directory, see [Configure self-managed Grafana to use Azure Monitor managed Prometheus with Azure Active Directory](./prometheus-self-managed-grafana-azure-active-directory.md).
## Azure Managed Grafana The following sections describe how to configure Azure Monitor managed service for Prometheus (preview) as a data source for Azure Managed Grafana.
The following sections describe how to configure Azure Monitor managed service f
Your Grafana workspace requires the following settings: - System managed identity enabled-- *Monitoring Data Reader* role for the Azure Monitor workspace
+- **Monitoring Data Reader** role for the Azure Monitor workspace
-Both of these settings are configured by default when you created your Grafana workspace. Verify these settings on the **Identity** page for your Grafana workspace.
+Both of these settings are configured by default when you create your Grafana workspace and link it to an Azure Monitor workspace. Verify these settings on the **Identity** page for your Grafana workspace.
:::image type="content" source="media/prometheus-grafana/grafana-system-identity.png" alt-text="Screenshot of Identity page for Azure Managed Grafana." lightbox="media/prometheus-grafana/grafana-system-identity.png":::
azure-monitor Prometheus Metrics Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-enable.md
# Collect Prometheus metrics from an AKS cluster (preview)
-This article describes how to configure your Azure Kubernetes Service (AKS) cluster to send data to Azure Monitor managed service for Prometheus. When you configure your AKS cluster to send data to Azure Monitor managed service for Prometheus, a containerized version of the [Azure Monitor agent](../agents/agents-overview.md) is installed with a metrics extension. Then you specify the Azure Monitor workspace where the data should be sent.
+This article describes how to configure your Azure Kubernetes Service (AKS) cluster to send data to Azure Monitor managed service for Prometheus. When you configure your AKS cluster to send data to Azure Monitor managed service for Prometheus, a containerized version of the [Azure Monitor agent](../agents/agents-overview.md) is installed with a metrics extension. In addition, you'll specify the Azure Monitor workspace where the data should be sent.
> [!NOTE]
-> The process described here doesn't enable [Container insights](../containers/container-insights-overview.md) on the cluster even though the Azure Monitor agent installed in this process is the same one used by Container insights.
+> The process described here doesn't enable [Container insights](../containers/container-insights-overview.md) on the cluster. However, both features use the same Azure Monitor agent.
> >For different methods to enable Container insights on your cluster, see [Enable Container insights](../containers/container-insights-onboard.md). For details on adding Prometheus collection to a cluster that already has Container insights enabled, see [Collect Prometheus metrics with Container insights](../containers/container-insights-prometheus.md).
+The Azure Monitor metrics agent's architecture utilizes a ReplicaSet and a DaemonSet. The ReplicaSet pod scrapes cluster-wide targets such as `kube-state-metrics` and custom application targets that are specified. The DaemonSet pods scrape targets solely on the node that the respective pod is deployed on, such as `node-exporter`. This is so that the agent can scale as the number of nodes and pods on a cluster increases.
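As a quick check of this layout once the addon is enabled, you can list both controllers; the resource names used here match the ones shown later in the Verify deployment section:

```bash
# Show the metrics addon DaemonSet(s) and ReplicaSet(s) in the kube-system namespace
kubectl get ds,rs --namespace=kube-system | grep ama-metrics
```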
+ ## Prerequisites - You must either have an [Azure Monitor workspace](azure-monitor-workspace-overview.md) or [create a new one](azure-monitor-workspace-manage.md#create-an-azure-monitor-workspace).
This article describes how to configure your Azure Kubernetes Service (AKS) clus
Use any of the following methods to install the Azure Monitor agent on your AKS cluster and send Prometheus metrics to your Azure Monitor workspace. ### [Azure portal](#tab/azure-portal)
+> [!NOTE]
+> Azure Managed Grafana is not available in the Azure US Government cloud currently.
1. Open the **Azure Monitor workspaces** menu in the Azure portal and select your cluster. 1. Select **Managed Prometheus** to display a list of AKS clusters.
Use any of the following methods to install the Azure Monitor agent on your AKS
:::image type="content" source="media/prometheus-metrics-enable/azure-monitor-workspace-configure-prometheus.png" lightbox="media/prometheus-metrics-enable/azure-monitor-workspace-configure-prometheus.png" alt-text="Screenshot that shows an Azure Monitor workspace with a Prometheus configuration."::: ### [CLI](#tab/cli)
+> [!NOTE]
+> Azure Managed Grafana is not available in the Azure US Government cloud currently.
#### Prerequisites
Use any of the following methods to install the Azure Monitor agent on your AKS
Use `az aks update` with the `-enable-azuremonitormetrics` option to install the metrics add-on. Depending on the Azure Monitor workspace and Grafana workspace you want to use, choose one of the following options: - **Create a new default Azure Monitor workspace.**<br>
-If no Azure Monitor workspace is specified, a default Azure Monitor workspace is created in the `DefaultRG-<cluster_region>` following the format `DefaultAzureMonitorWorkspace-<mapped_region>`.
-This Azure Monitor workspace is in the region specified in [Region mappings](#region-mappings).
+If no Azure Monitor workspace is specified, a default Azure Monitor workspace is created in a resource group with the name `DefaultRG-<cluster_region>` and is named `DefaultAzureMonitorWorkspace-<mapped_region>`.
+ ```azurecli az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> ``` - **Use an existing Azure Monitor workspace.**<br>
-If the Azure Monitor workspace is linked to one or more Grafana workspaces, the data is available in Grafana.
+If the existing Azure Monitor workspace is already linked to one or more Grafana workspaces, data is available in that Grafana workspace.
```azurecli az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <workspace-name-resource-id>
The output is similar to the following example:
## [Azure Resource Manager](#tab/resource-manager)
+> [!NOTE]
+> Azure Managed Grafana is not available in the Azure US Government cloud currently.
+ ### Prerequisites - Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in the Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.-- If the Azure Managed Grafana instance is in a subscription other than the Azure Monitor Workspaces subscription, register the Azure Monitor Workspace subscription with the `Microsoft.Dashboard` resource provider by following [this documentation](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).-- The Azure Monitor workspace and Azure Managed Grafana workspace must already be created.-- The template must be deployed in the same resource group as the Azure Managed Grafana workspace.-- Users with the User Access Administrator role in the subscription of the AKS cluster can enable the Monitoring Data Reader role directly by deploying the template.
+- If the Azure Managed Grafana instance is in a subscription other than the Azure Monitor workspace subscription, register the Azure Monitor workspace subscription with the `Microsoft.Dashboard` resource provider by following [this documentation](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
+- The Azure Monitor workspace and Azure Managed Grafana instance must already be created.
+- The template must be deployed in the same resource group as the Azure Managed Grafana instance.
+- Users with the `User Access Administrator` role in the subscription of the AKS cluster can enable the `Monitoring Data Reader` role directly by deploying the template.
### Retrieve required values for Grafana resource+
+> [!NOTE]
+> Azure Managed Grafana is not available in the Azure US Government cloud currently.
+ On the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
-If you're using an existing Azure Managed Grafana instance that's already linked to an Azure Monitor workspace, you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, the instance hasn't been linked with any Azure Monitor workspace.
+If you're using an existing Azure Managed Grafana instance that's already linked to an Azure Monitor workspace, you need the list of existing Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace.
```json "properties": {
The final `azureMonitorWorkspaceResourceId` entry is already in the template and
## [Bicep](#tab/bicep)
+> [!NOTE]
+> Azure Managed Grafana is not available in the Azure US Government cloud currently.
+ ### Prerequisites - Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.-- The Azure Monitor workspace and Azure Managed Grafana workspace must already be created.-- The template needs to be deployed in the same resource group as the Azure Managed Grafana workspace.-- Users with the User Access Administrator role in the subscription of the AKS cluster can enable the Monitoring Data Reader role directly by deploying the template.
+- The Azure Monitor workspace and Azure Managed Grafana instance must already be created.
+- The template needs to be deployed in the same resource group as the Azure Managed Grafana instance.
+- Users with the `User Access Administrator` role in the subscription of the AKS cluster can enable the `Monitoring Data Reader` role directly by deploying the template.
-### Minor limitation with Bicep deployment
-Currently in Bicep, there's no way to explicitly "scope" the Monitoring Data Reader role assignment on a string parameter "resource ID" for an Azure Monitor workspace (like in an ARM template). Bicep expects a value of type `resource | tenant`. Currently, there's no REST API [spec](https://github.com/Azure/azure-rest-api-specs) for an Azure Monitor workspace.
+### Limitation with Bicep deployment
+Currently in Bicep, there's no way to explicitly scope the `Monitoring Data Reader` role assignment on a string parameter "resource ID" for an Azure Monitor workspace (like in an ARM template). Bicep expects a value of type `resource | tenant`. There's also no REST API [spec](https://github.com/Azure/azure-rest-api-specs) for an Azure Monitor workspace.
-As a workaround, the default scoping for the Monitoring Data Reader role is on the resource group. The role is applied on the same Azure Monitor workspace (by inheritance), which is the expected behavior. After you deploy this Bicep template, the Grafana resource gets read permissions in all the Azure Monitor workspaces under the subscription.
+Therefore, the default scoping for the `Monitoring Data Reader` role is on the resource group. The role is applied on the same Azure Monitor workspace (by inheritance), which is the expected behavior. After you deploy this Bicep template, the Grafana instance is given `Monitoring Data Reader` permissions for all the Azure Monitor workspaces in that resource group.
### Retrieve required values for a Grafana resource On the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
-If you're using an existing Azure Managed Grafana instance that's already linked to an Azure Monitor workspace, you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, the instance hasn't been linked with any Azure Monitor workspace.
+If you're using an existing Azure Managed Grafana instance that's already linked to an Azure Monitor workspace, you need the list of existing Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace.
```json "properties": {
If you're using an existing Azure Managed Grafana instance that's already linked
### Download and edit templates and the parameter file
-1. Download the main Bicep template from [this GitHub file](https://aka.ms/azureprometheus-enable-bicep-template). Save it as **FullAzureMonitorMetricsProfile.bicep**.
-1. Download the parameter file from [this GitHub file](https://aka.ms/azureprometheus-enable-bicep-template-parameters) and save it as **FullAzureMonitorMetricsProfileParameters.json** in the same directory as the main Bicep template.
-1. Download the [nested_azuremonitormetrics_dcra_clusterResourceId.bicep](https://aka.ms/nested_azuremonitormetrics_dcra_clusterResourceId) and [nested_azuremonitormetrics_profile_clusterResourceId.bicep](https://aka.ms/nested_azuremonitormetrics_profile_clusterResourceId) files in the same directory as the main Bicep template.
-1. Edit the values in the parameter file.
-1. The main Bicep template creates all the required resources. It uses two modules for creating the Data Collection Rule Associations (DCRA) and Azure Monitor metrics profile resources from the other two Bicep files.
+1. Download the [main Bicep template](https://aka.ms/azureprometheus-enable-bicep-template). Save it as **FullAzureMonitorMetricsProfile.bicep**.
+2. Download the [parameter file](https://aka.ms/azureprometheus-enable-bicep-template-parameters) and save it as **FullAzureMonitorMetricsProfileParameters.json** in the same directory as the main Bicep template.
+3. Download the [nested_azuremonitormetrics_dcra_clusterResourceId.bicep](https://aka.ms/nested_azuremonitormetrics_dcra_clusterResourceId) and [nested_azuremonitormetrics_profile_clusterResourceId.bicep](https://aka.ms/nested_azuremonitormetrics_profile_clusterResourceId) files into the same directory as the main Bicep template.
+4. Edit the values in the parameter file.
+5. The main Bicep template creates all the required resources. It uses two modules for creating the Data Collection Rule Associations (DCRA) and Azure Monitor metrics profile resources from the other two Bicep files.
| Parameter | Value | |:|:|
The final `azureMonitorWorkspaceResourceId` entry is already in the template and
On the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
-If you're using an existing Azure Managed Grafana instance that's already linked to an Azure Monitor workspace, you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, the instance hasn't been linked with any Azure Monitor workspace. Update the azure_monitor_workspace_integrations block(shown below) in main.tf with the list of grafana integrations.
+If you're using an existing Azure Managed Grafana instance that's already linked to an Azure Monitor workspace, you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, the instance hasn't been linked with any Azure Monitor workspace. Update the azure_monitor_workspace_integrations block (shown here) in main.tf with the list of Grafana integrations.
```.tf azure_monitor_workspace_integrations {
If you're using an existing Azure Managed Grafana instance that's already linked
### Download and edit the templates
-If you are deploying a new AKS cluster using Terraform with managed Prometheus addon enabled, follow the steps below.
+If you're deploying a new AKS cluster using Terraform with managed Prometheus addon enabled, follow these steps:
-1. Please download all files under [AddonTerraformTemplate](https://aka.ms/AAkm357).
+1. Download all files under [AddonTerraformTemplate](https://aka.ms/AAkm357).
2. Edit the variables in variables.tf file with the correct parameter values. 3. Run `terraform init -upgrade` to initialize the Terraform deployment. 4. Run `terraform plan -out main.tfplan` to initialize the Terraform deployment.
If you are deploying a new AKS cluster using Terraform with managed Prometheus a
Note: Pass the variables for `annotations_allowed` and `labels_allowed` keys in main.tf only when those values exist. These are optional blocks.
-**NOTE**
-- Please edit the main.tf file appropriately before running the terraform template-- Please add in any existing azure_monitor_workspace_integrations values to the grafana resource before running the template otherwise the older values will get deleted and replaced with what is there in the template at the time of deployment-- Users with 'User Access Administrator' role in the subscription of the AKS cluster can be able to enable 'Monitoring Data Reader' role directly by deploying the template.-- Please edit the grafanaSku parameter if you are using a non standard SKU.-- Please run this template in the Grafana Resources RG.
+> [!NOTE]
+> Edit the main.tf file appropriately before running the terraform template. Add any existing azure_monitor_workspace_integrations values to the Grafana resource before running the template; otherwise, the older values are deleted and replaced with what is in the template during deployment. Users with the 'User Access Administrator' role in the subscription of the AKS cluster can enable the 'Monitoring Data Reader' role directly by deploying the template. Edit the grafanaSku parameter if you're using a nonstandard SKU, and run this template in the Grafana resource's resource group.
## [Azure Policy](#tab/azurepolicy)
+> [!NOTE]
+> Azure Managed Grafana is not available in the Azure US Government cloud currently.
+ ### Prerequisites -- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in the Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.-- The Azure Monitor workspace and Azure Managed Grafana workspace must already be created.
+- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command using the Azure CLI:
+
+ `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`
+
+- The Azure Monitor workspace and Azure Managed Grafana instance must already be created.
### Download Azure Policy rules and parameters and deploy
-1. Download the main Azure Policy rules template from [this GitHub file](https://aka.ms/AddonPolicyMetricsProfile). Save it as **AddonPolicyMetricsProfile.rules.json**.
-1. Download the parameter file from [this GitHub file](https://aka.ms/AddonPolicyMetricsProfile.parameters). Save it as **AddonPolicyMetricsProfile.parameters.json** in the same directory as the rules template.
-1. Create the policy definition by using a command like: <br> `az policy definition create --name "(Preview) Prometheus Metrics addon" --display-name "(Preview) Prometheus Metrics addon" --mode Indexed --metadata version=1.0.0 category=Kubernetes --rules .\AddonPolicyMetricsProfile.rules.json --params .\AddonPolicyMetricsProfile.parameters.json`
+1. Download the main [Azure Policy rules template](https://aka.ms/AddonPolicyMetricsProfile). Save it as **AddonPolicyMetricsProfile.rules.json**.
+1. Download the [parameter file](https://aka.ms/AddonPolicyMetricsProfile.parameters). Save it as **AddonPolicyMetricsProfile.parameters.json** in the same directory as the rules template.
+1. Create the policy definition using the following command:
+
+ `az policy definition create --name "(Preview) Prometheus Metrics addon" --display-name "(Preview) Prometheus Metrics addon" --mode Indexed --metadata version=1.0.0 category=Kubernetes --rules .\AddonPolicyMetricsProfile.rules.json --params .\AddonPolicyMetricsProfile.parameters.json`
+ 1. After you create the policy definition, in the Azure portal, select **Policy** > **Definitions**. Select the policy definition you created. 1. Select **Assign**, go to the **Parameters** tab, and fill in the details. Select **Review + Create**.
-1. Now that the policy is assigned to the subscription, whenever you create a new cluster, which doesn't have Prometheus enabled, the policy runs and deploys the resources. If you want to apply the policy to an existing AKS cluster, create a **Remediation task** for that AKS cluster resource after you go to the **Policy Assignment**.
-1. Now you should see metrics flowing in the existing linked Grafana resource, which is linked with the corresponding Azure Monitor workspace.
+1. After the policy is assigned to the subscription, whenever you create a new cluster without Prometheus enabled, the policy runs and deploys the resources required to enable Prometheus monitoring. If you want to apply the policy to an existing AKS cluster, create a **Remediation task** for that AKS cluster resource after you go to the **Policy Assignment**. (To create the assignment from the Azure CLI instead of the portal, see the sketch after this list.)
+1. Now you should see metrics flowing in the existing Azure Managed Grafana instance, which is linked with the corresponding Azure Monitor workspace.
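The portal assignment in the preceding steps can also be scripted with the Azure CLI. This is a sketch only: the assignment name is arbitrary, and the parameter values (Azure Monitor workspace resource ID, Grafana resource ID, and so on) must match the parameters defined in the downloaded parameter file:

```azurecli
# Assign the policy definition created earlier at subscription scope
az policy assignment create \
  --name "prometheus-metrics-addon" \
  --policy "(Preview) Prometheus Metrics addon" \
  --scope "/subscriptions/<subscription-id>" \
  --params "<JSON-string-or-file-with-parameter-values>"
```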
-In case you create a new Managed Grafana resource from the Azure portal, link it with the corresponding Azure Monitor workspace from the **Linked Grafana Workspaces** tab of the relevant **Azure Monitor Workspace** page. Assign the Monitoring Data Reader role to the Grafana MSI on the Azure Monitor workspace resource so that it can read data for displaying the charts. Use the following instructions.
+Afterwards, if you create a new Managed Grafana instance, you can link it with the corresponding Azure Monitor workspace from the **Linked Grafana Workspaces** tab of the relevant **Azure Monitor Workspace** page. The `Monitoring Data Reader` role must be assigned to the managed identity of the Managed Grafana instance, scoped to the Azure Monitor workspace, so that Grafana can query the metrics. Use the following instructions to do so:
1. On the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
In case you create a new Managed Grafana resource from the Azure portal, link it
Deploy the template with the parameter file by using any valid method for deploying ARM templates. For examples of different methods, see [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates).
-### Limitations
+### Limitations during enablement/deployment
-- Ensure that you update the `kube-state metrics` Annotations and Labels list with proper formatting. There's a limitation in the ARM template deployments that require exact values in the `kube-state` metrics pods. If the Kubernetes pod has any issues with malformed parameters and isn't running, the feature won't work as expected.
+- Ensure that you update the `kube-state metrics` annotations and labels list with proper formatting. There's a limitation in ARM template deployments that requires exact values in the `kube-state` metrics pods. If the Kubernetes pod has any issues with malformed parameters and isn't running, the feature might not work as expected.
- A data collection rule and data collection endpoint are created with the name `MSProm-\<short-cluster-region\>-\<cluster-name\>`. Currently, these names can't be modified.-- You must get the existing Azure Monitor workspace integrations for a Grafana workspace and update the ARM template with it. Otherwise, it overwrites and removes the existing integrations from the Grafana workspace.
+- You must get the existing Azure Monitor workspace integrations for a Grafana instance and update the ARM template with them. Otherwise, the deployment overwrites and removes the existing integrations from the Grafana instance.
## Enable Windows metrics collection
As of version 6.4.0-main-02-22-2023-3ee44b9e of the Managed Prometheus addon con
``` 1. Apply the [ama-metrics-settings-configmap](https://github.com/Azure/prometheus-collector/blob/main/otelcollector/configmaps/ama-metrics-settings-configmap.yaml) to your cluster. Set the `windowsexporter` and `windowskubeproxy` Booleans to `true`. For more information, see [Metrics add-on settings configmap](./prometheus-metrics-scrape-configuration.md#metrics-add-on-settings-configmap). (A scripted version of this step is sketched after this list.)
-1. Enable the recording rules required for the default dashboards:
-
- * For the CLI, include the option `--enable-windows-recording-rules`.
- * For an ARM template, Bicep, or Azure Policy, set `enableWindowsRecordingRules` to `true` in the parameters file.
+1. Enable the recording rules that are required for the out-of-the-box dashboards:
- If the cluster is already onboarded to Azure Monitor metrics, to enable Windows recording rule groups, use this [ARM template](https://github.com/Azure/prometheus-collector/blob/kaveesh/windows_recording_rules/AddonArmTemplate/WindowsRecordingRuleGroupTemplate/WindowsRecordingRules.json) and [parameters](https://github.com/Azure/prometheus-collector/blob/kaveesh/windows_recording_rules/AddonArmTemplate/WindowsRecordingRuleGroupTemplate/WindowsRecordingRulesParameters.json) file to create the rule groups.
+ * If onboarding using the CLI, include the option `--enable-windows-recording-rules`.
+ * If onboarding using an ARM template, Bicep, or Azure Policy, set `enableWindowsRecordingRules` to `true` in the parameters file.
+ * If the cluster is already onboarded, use [this ARM template](https://github.com/Azure/prometheus-collector/blob/kaveesh/windows_recording_rules/AddonArmTemplate/WindowsRecordingRuleGroupTemplate/WindowsRecordingRules.json) and [this parameter file](https://github.com/Azure/prometheus-collector/blob/kaveesh/windows_recording_rules/AddonArmTemplate/WindowsRecordingRuleGroupTemplate/WindowsRecordingRulesParameters.json) to create the rule groups.
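One way to script the ConfigMap step above is sketched here. The URL is simply the raw view of the file linked in that step, and the edit is the two Boolean settings it names:

```bash
# Download the metrics addon settings ConfigMap
curl -LO https://raw.githubusercontent.com/Azure/prometheus-collector/main/otelcollector/configmaps/ama-metrics-settings-configmap.yaml

# Edit the file to set windowsexporter = true and windowskubeproxy = true, then apply it
kubectl apply -f ama-metrics-settings-configmap.yaml -n kube-system
```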
## Verify deployment
As of version 6.4.0-main-02-22-2023-3ee44b9e of the Managed Prometheus addon con
kubectl get ds ama-metrics-node --namespace=kube-system ```
- The number of pods should be equal to the number of nodes on the cluster. The output should resemble the following example:
+ The number of pods should be equal to the number of Linux nodes on the cluster. The output should resemble the following example:
``` User@aksuser:~$ kubectl get ds ama-metrics-node --namespace=kube-system
As of version 6.4.0-main-02-22-2023-3ee44b9e of the Managed Prometheus addon con
kubectl get ds ama-metrics-win-node --namespace=kube-system ```
- The output should resemble the following example:
+ The number of pods should be equal to the number of Windows nodes on the cluster. The output should resemble the following example:
``` User@aksuser:~$ kubectl get ds ama-metrics-win-node --namespace=kube-system
As of version 6.4.0-main-02-22-2023-3ee44b9e of the Managed Prometheus addon con
ama-metrics-win-node 3 3 3 3 3 <none> 10h ```
-1. Run the following command to verify that the ReplicaSets were deployed properly:
+1. Run the following command to verify that the two ReplicaSets were deployed properly:
``` kubectl get rs --namespace=kube-system
As of version 6.4.0-main-02-22-2023-3ee44b9e of the Managed Prometheus addon con
ama-metrics-5c974985b8 1 1 1 11h ama-metrics-ksm-5fcf8dffcd 1 1 1 11h ```
+## Resources provisioned when the metrics addon is enabled for an AKS cluster
+
+When you enable the metrics addon, the following resources are provisioned:
+
+| Resource Name | Resource Type | Resource Group | Region/Location | Description |
+ |:|:|:|:|:|
+ | `MSPROM-<aksclusterregion>-<clustername>` | **Data collection rule** | Same resource group as the AKS cluster resource | Same region as the Azure Monitor workspace | Data collection rule used by the metrics addon for Prometheus metrics collection. It has the chosen Azure Monitor workspace as its destination and is associated with the AKS cluster resource. |
+ | `MSPROM-<aksclusterregion>-<clustername>` | **Data collection endpoint** | Same resource group as the AKS cluster resource | Same region as the Azure Monitor workspace | Data collection endpoint used by the preceding data collection rule to ingest Prometheus metrics from the metrics addon. |
+
+When you create a new Azure Monitor workspace, the following additional resources are created as part of it:
+
+| Resource Name | Resource Type | Resource Group | Region/Location | Description |
+ |:|:|:|:|:|
+ | `<azuremonitor-workspace-name>` | **System data collection rule** | MA_\<azuremonitor-workspace-name>_\<azuremonitor-workspace-region>_managed | Same region as the Azure Monitor workspace | **System** data collection rule that you can use when you run a self-managed (OSS) Prometheus server and remote write to the Azure Monitor workspace. |
+ | `<azuremonitor-workspace-name>` | **System data collection endpoint** | MA_\<azuremonitor-workspace-name>_\<azuremonitor-workspace-region>_managed | Same region as the Azure Monitor workspace | **System** data collection endpoint that you can use when you run a self-managed (OSS) Prometheus server and remote write to the Azure Monitor workspace. |
+
+
+## HTTP Proxy
-## Feature support
+Azure Monitor metrics addon supports HTTP Proxy and uses the same settings as the HTTP Proxy settings for the AKS cluster configured with [these instructions](../../../articles/aks/http-proxy.md).
-- ARM64 and Mariner nodes are supported.-- HTTP Proxy is supported and uses the same settings as the HTTP Proxy settings for the AKS cluster configured with [these instructions](../../../articles/aks/http-proxy.md).
+## Network firewall requirements
-## Limitations
+**Azure public cloud**
-- CPU and Memory requests and limits can't be changed for the Container insights metrics add-on. If changed, they're reconciled and replaced by original values in a few seconds.
+The following table lists the firewall configuration required for Azure Monitor Prometheus metrics ingestion in the Azure public cloud. All network traffic from the agent is outbound to Azure Monitor.
-- Azure Monitor Private Link isn't currently supported.-- Only public clouds are currently supported.
+|Agent resource| Purpose | Port |
+|--|--|--|
+| `global.handler.control.monitor.azure.com` | Access control service / Azure Monitor control plane service | 443 |
+| `*.ingest.monitor.azure.com` | Azure Monitor managed service for Prometheus - metrics ingestion endpoint (DCE) | 443 |
+| `*.handler.control.monitor.azure.com` | For querying data collection rules | 443 |
+
+**Azure US Government cloud**
+
+The following table lists the firewall configuration required for Azure Monitor Prometheus metrics ingestion in the Azure US Government cloud. All network traffic from the agent is outbound to Azure Monitor.
+
+|Agent resource| Purpose | Port |
+|--|--|--|
+| `global.handler.control.monitor.azure.us` | Access control service / Azure Monitor control plane service | 443 |
+| `*.ingest.monitor.azure.us` | Azure Monitor managed service for Prometheus - metrics ingestion endpoint (DCE) | 443 |
+| `*.handler.control.monitor.azure.us` | For querying data collection rules | 443 |
## Uninstall the metrics add-on

Currently, the Azure CLI is the only option to remove the metrics add-on and stop sending Prometheus metrics to Azure Monitor managed service for Prometheus.
Currently, the Azure CLI is the only option to remove the metrics add-on and sto
az extension add --name aks-preview
```
-1. Use the following command to remove the agent from the cluster nodes and delete the recording rules created for the data being collected from the cluster, along with the DCRA that links the data collection endpoint or data collection rule with your cluster. This action doesn't remove the data collection endpoint, data collection rule, or the data already collected and stored in your Azure Monitor workspace.
+2. Use the following command to remove the agent from the cluster nodes and delete the recording rules created for that cluster. This also deletes the data collection endpoint (DCE), the data collection rule (DCR), and the DCRA that links the data collection rule with the cluster. This action doesn't remove any existing data stored in your Azure Monitor workspace.
```azurecli
az aks update --disable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group>
```
-## Region mappings
-When you allow a default Azure Monitor workspace to be created when you install the metrics add-on, it's created in the region listed in the following table.
-
-| AKS cluster region | Azure Monitor workspace region |
-|--||
-|australiacentral |eastus|
-|australiacentral2 |eastus|
-|australiaeast |eastus|
-|australiasoutheast |eastus|
-|brazilsouth |eastus|
-|canadacentral |eastus|
-|canadaeast |eastus|
-|centralus |centralus|
-|centralindia |centralindia|
-|eastasia |westeurope|
-|eastus |eastus|
-|eastus2 |eastus2|
-|francecentral |westeurope|
-|francesouth |westeurope|
-|japaneast |eastus|
-|japanwest |eastus|
-|koreacentral |westeurope|
-|koreasouth |westeurope|
-|northcentralus |eastus|
-|northeurope |westeurope|
-|southafricanorth |westeurope|
-|southafricawest |westeurope|
-|southcentralus |eastus|
-|southeastasia |westeurope|
-|southindia |centralindia|
-|uksouth |westeurope|
-|ukwest |westeurope|
-|westcentralus |eastus|
-|westeurope |westeurope|
-|westindia |centralindia|
-|westus |westus|
-|westus2 |westus2|
-|westus3 |westus|
-|norwayeast |westeurope|
-|norwaywest |westeurope|
-|switzerlandnorth |westeurope|
-|switzerlandwest |westeurope|
-|uaenorth |westeurope|
-|germanywestcentral |westeurope|
-|germanynorth |westeurope|
-|uaecentral |westeurope|
-|eastus2euap |eastus2euap|
-|centraluseuap |westeurope|
-|brazilsoutheast |eastus|
-|jioindiacentral |centralindia|
-|swedencentral |westeurope|
-|swedensouth |westeurope|
-|qatarcentral |westeurope|
+## Supported regions
+
+The list of regions where Azure Monitor managed Prometheus metrics and Azure Monitor workspaces are supported can be found [here](https://aka.ms/ama-metrics-supported-regions) under the Managed Prometheus tag.
## Next steps
azure-monitor Prometheus Metrics Multiple Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-multiple-workspaces.md
# Send Prometheus metrics to multiple Azure Monitor workspaces (preview)
-Routing metrics to more Azure Monitor Workspaces can be done through the creation of additional data collection rules. All metrics can be sent to all workspaces or different metrics can be sent to different workspaces.
+Routing metrics to more Azure Monitor workspaces can be done through the creation of additional data collection rules. All metrics can be sent to all workspaces or different metrics can be sent to different workspaces.
## Send same metrics to multiple Azure Monitor workspaces
-You can create multiple Data Collection Rules that point to the same Data Collection Endpoint for metrics to be sent to additional Azure Monitor Workspaces from the same Kubernetes cluster. In case you have a very high volume of metrics, a new Data Collection Endpoint can be created as well. Please refer to the service limits [document](../service-limits.md) regarding ingestion limits. Currently, this is only available through onboarding through Resource Manager templates. You can follow the [regular onboarding process](prometheus-metrics-enable.md) and then edit the same Resource Manager templates to add additional DCRs and DCEs(if applicable) for your additional Azure Monitor Workspaces. You'll need to edit the template to add an additional parameters for every additional Azure Monitor workspace, add another DCR for every additional Azure Monitor workspace, add another DCE (if applicable), add the Monitor Reader Role for the new Azure Monitor Workspace and add an additional Azure Monitor workspace integration for Grafana.
+You can create multiple Data Collection Rules that point to the same Data Collection Endpoint so that metrics are sent to additional Azure Monitor workspaces from the same Kubernetes cluster. If you have a very high volume of metrics, you can also create a new Data Collection Endpoint; refer to the service limits [document](../service-limits.md) for ingestion limits. Currently, this is only available through onboarding with Resource Manager templates. You can follow the [regular onboarding process](prometheus-metrics-enable.md) and then edit the same Resource Manager templates to add additional DCRs and DCEs (if applicable) for your additional Azure Monitor workspaces. You need to edit the template to add additional parameters for every additional Azure Monitor workspace, add another DCR for every additional Azure Monitor workspace, add another DCE (if applicable), add the Monitor Reader role for the new Azure Monitor workspace, and add an additional Azure Monitor workspace integration for Grafana.
- Add the following parameters: ```json
You can create multiple Data Collection Rules that point to the same Data Collec
``` ## Send different metrics to different Azure Monitor workspaces
-If you want to send some metrics to one Azure Monitor Workspace and other metrics to a different one, follow the above steps to add additional DCRs. The value of `microsoft_metrics_include_label` under the `labelIncludeFilter` in the DCR is the identifier for the workspace. To then configure which metrics are routed to which workspace, you can add an extra pre-defined label, `microsoft_metrics_account` to the metrics. The value should be the same as the corresponding `microsoft_metrics_include_label` in the DCR for that workspace. To add the label to the metrics, you can utilize `relabel_configs` in your scrape config. To send all metrics from one job to a certain workspace, add the following relabel config:
+If you want to send some metrics to one Azure Monitor workspace and other metrics to a different one, follow the above steps to add additional DCRs. The value of `microsoft_metrics_include_label` under the `labelIncludeFilter` in the DCR is the identifier for the workspace. To then configure which metrics are routed to which workspace, you can add an extra pre-defined label, `microsoft_metrics_account`, to the metrics. The value should be the same as the corresponding `microsoft_metrics_include_label` in the DCR for that workspace. To add the label to the metrics, you can utilize `relabel_configs` in your scrape config. To send all metrics from one job to a certain workspace, add the following relabel config:
```yaml
relabel_configs:
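  # Hedged sketch (placeholder value): attach the pre-defined microsoft_metrics_account label
  # so these metrics route to the workspace whose DCR labelIncludeFilter contains the matching
  # microsoft_metrics_include_label value. "MonitoringAccount2" is illustrative only.
  - target_label: microsoft_metrics_account
    action: replace
    replacement: "MonitoringAccount2"
```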
azure-monitor Prometheus Metrics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-overview.md
Azure Monitor managed service for Prometheus allows you to collect and analyze m
> Azure Monitor managed service for Prometheus is intended for storing information about the service health of customer machines and applications. It is not intended for storing any data classified as Personally Identifiable Information (PII) or End User Identifiable Information (EUII). We strongly recommend that you do not send any sensitive information (usernames, credit card numbers, and so on) into Azure Monitor managed service for Prometheus fields like metric names, label names, or label values.

## Data sources
-Azure Monitor managed service for Prometheus can currently collect data from any of the following data sources.
+Azure Monitor managed service for Prometheus can currently collect data from any of the following data sources:
- Azure Kubernetes Service (AKS)
- Any Kubernetes cluster running self-managed Prometheus using [remote-write](https://aka.ms/azureprometheus-promio-prw).
Azure Monitor managed service for Prometheus supports recording rules and alert
Alerts fired by alert rules can trigger actions or notifications, as defined in the [action groups](../alerts/action-groups.md) configured for the alert rule. You can also view fired and resolved Prometheus alerts in the Azure portal along with other alert types.
-## Limitations
-See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for performance related service limits for Azure Monitor workspaces.
+## Service limits & quotas
+
+See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for the service limits and quotas for Azure Monitor managed service for Prometheus.
+
+## Limitations/Known issues - Azure Monitor managed service for Prometheus
-- Azure monitor managed service for Prometheus is only supported in public clouds.-- Private Links aren't supported for data collection into Azure monitor workspace.-- Metrics addon doesn't work on AKS clusters configured with HTTP proxy. - Scraping and storing metrics at frequencies less than 1 second isn't supported.
+- Metrics with the same label names but different casing are rejected at ingestion (for example, `diskSize(cluster="eastus", node="node1", filesystem="usr_mnt", FileSystem="usr_opt")` is invalid because of the `filesystem` and `FileSystem` labels and is rejected).
+- Azure China cloud and air-gapped clouds aren't supported for Azure Monitor managed service for Prometheus.
+- To monitor Windows nodes and pods in your cluster(s), follow the steps outlined [here](./prometheus-metrics-enable.md#enable-windows-metrics-collection).
+- Azure Managed Grafana isn't currently available in the Azure US Government cloud.
+- Usage metrics (metrics under the `Metrics` menu for the Azure Monitor workspace), that is, ingestion quota limits and current usage for any Azure Monitor workspace, aren't yet available in the US Government cloud.
## Prometheus references

Following are links to Prometheus documentation.
Following are links to Prometheus documentation.
## Next steps - [Enable Azure Monitor managed service for Prometheus](prometheus-metrics-enable.md).-- [Collect Prometheus metrics for your AKS cluster](../containers/container-insights-prometheus-metrics-addon.md). - [Configure Prometheus alerting and recording rules groups](prometheus-rule-groups.md). - [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
azure-monitor Prometheus Metrics Scrape Configuration Minimal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-configuration-minimal.md
Title: Minimal Prometheus ingestion profile in Azure Monitor (preview)
-description: Describes how the setting for minimal ingestion profile for Prometheus metrics in Azure Monitor is configured and how you can modify it to collect additional data.
+description: Describes the minimal ingestion profile in Azure Monitor managed service for Prometheus and how you can configure it to collect more data.
Last updated 09/28/2022 ++ # Minimal ingestion profile for Prometheus metrics in Azure Monitor (preview)
-When Prometheus metric scraping is enabled for a cluster in Container insights, it collects a minimal amount of data by default. This helps reduce ingestion volume of series/metrics used by default dashboards, default recording rules & default alerts. This article describes how this setting is configured and how you can modify it to collect additional data.
+The Azure Monitor metrics add-on collects a number of Prometheus metrics by default. `Minimal ingestion profile` is a setting that helps reduce the ingestion volume of metrics, because only metrics used by the default dashboards, default recording rules, and default alerts are collected. This article describes how this setting is configured and lists the metrics collected by default when `minimal ingestion profile` is enabled. You can modify the collection to enable collecting more metrics, as specified below.
+
+> [!NOTE]
+> For add-on based collection, the `minimal ingestion profile` setting is enabled by default.
+
+The following targets are **enabled/ON** by default, meaning you don't have to provide any scrape job configuration for them; the metrics add-on scrapes these targets automatically:
+
+- `cadvisor` (`job=cadvisor`)
+- `nodeexporter` (`job=node`)
+- `kubelet` (`job=kubelet`)
+- `kube-state-metrics` (`job=kube-state-metrics`)
+
+The following targets have built-in scrape configuration but scraping isn't enabled (**disabled/OFF**) by default. To collect them, you need to turn scraping ON for these targets by using the [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) under the `default-scrape-settings-enabled` section (see the sketch after this list):
+
+- `core-dns` (`job=kube-dns`)
+- `kube-proxy` (`job=kube-proxy`)
+- `api-server` (`job=kube-apiserver`)
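For illustration, here's a minimal sketch of the `default-scrape-settings-enabled` section of that configmap. The `key = value` layout mirrors the settings sections shown later in this article, and the specific values are illustrative only:

```yaml
# Illustrative excerpt of ama-metrics-settings-configmap (applied to the kube-system namespace).
# Setting a target to true turns on its built-in scrape job.
default-scrape-settings-enabled: |-
  coredns = true
  kubeproxy = false
  apiserver = false
```

Apply the edited configmap to your cluster, for example with `kubectl apply -f ama-metrics-settings-configmap.yaml`.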
+
+> [!NOTE]
+> The default scrape frequency for all default targets and scrapes is `30 seconds`. You can override it per target by using the [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) under the `default-targets-scrape-interval-settings` section.
+> You can read more about the four different configmaps used by the metrics add-on [here](prometheus-metrics-scrape-configuration.md).
## Configuration setting
-The setting `default-targets-metrics-keep-list.minimalIngestionProfile="true"` is enabled by default on the metrics addon. You can specify this setting in [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) under default-targets-metrics-keep-list section.
+The setting `default-targets-metrics-keep-list.minimalIngestionProfile="true"` is enabled by default on the metrics add-on. You can specify this setting in the [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) under the `default-targets-metrics-keep-list` section.
## Scenarios

There are four scenarios where you may want to customize this behavior:

**Ingest only minimal metrics per default target.**<br>
-This is the default behavior with the setting `default-targets-metrics-keep-list.minimalIngestionProfile="true"`. Only series and metrics listed below will be ingested for each of the default targets.
+This is the default behavior with the setting `default-targets-metrics-keep-list.minimalIngestionProfile="true"`. Only metrics listed below are ingested for each of the default targets.
-**Ingest a few additional metrics for one or more default targets in addition to minimal metrics.**<br>
-Keep ` minimalIngestionProfile="true"` and specify the appropriate `keeplistRegexes.*` specific to the target, for example `keeplistRegexes.coreDns="X|Y"`. X,Y will be merged with default metric list for the target and then ingested. |
+**Ingest a few other metrics for one or more default targets in addition to minimal metrics.**<br>
+Keep `minimalIngestionProfile="true"` and specify the appropriate `keeplistRegexes.*` setting for the target, for example `keeplistRegexes.coreDns="X|Y"`. X and Y are then merged with the default metric list for the target and ingested.
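As a concrete illustration, the corresponding keep-list entry in `ama-metrics-settings-configmap` might look like the following sketch. The `coredns` key and the regex are placeholders, and the `key = "regex"` layout is assumed to match the keep-list examples later in this article:

```yaml
# Illustrative sketch: extra coredns metrics merged with the default minimal list for that target.
default-targets-metrics-keep-list: |-
  minimalingestionprofile = true
  coredns = "coredns_cache_entries|coredns_plugin_enabled"
```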
**Ingest only a specific set of metrics for a target, and nothing else.**<br>
-Set `minimalIngestionProfile="false"` and specify the appropriate `default-targets-metrics-keep-list.="X|Y"` specific to the target in the `ama-metrics-settings-configmap`.
+Set `minimalIngestionProfile="false"` and specify the appropriate `default-targets-metrics-keep-list.<targetname>="X|Y"` for the target in the `ama-metrics-settings-configmap`.
**Ingest all metrics scraped for the default target.**<br>
-Set `minimalIngestionProfile="false"` and don't specify any `default-targets-metrics-keep-list.<targetname>` for that target. This can increase metric ingestion volume by a factor per target.
+Set `minimalIngestionProfile="false"` and don't specify any `default-targets-metrics-keep-list.<targetname>` for that target. Changing this to `false` can increase metric ingestion volume significantly for that target.
> [!NOTE]
-> `up` metric is not part of the allow/keep list because it will be ingested per scrape, per target, regardless of `keepLists` specified. This metric is not actually scraped but produced as result of scrape by the metrics addon. For histograms and summaries, each series has to be included explicitly in the list (`*bucket`, `*sum`, `*count`).
+> `up` metric is not part of the allow/keep list because it is ingested per scrape, per target, regardless of the `keepLists` specified. This metric is not actually scraped but is produced as a result of the scrape operation by the metrics add-on. For histograms and summaries, each series has to be included explicitly in the list (`*bucket`, `*sum`, `*count` series).
+
+### Minimal ingestion for default ON targets
+The following metrics are allow-listed with `minimalingestionprofile=true` for default ON targets. These metrics are collected by default because these targets are scraped by default.
+
+**kubelet**<br>
+- `kubelet_volume_stats_used_bytes`
+- `kubelet_node_name`
+- `kubelet_running_pods`
+- `kubelet_running_pod_count`
+- `kubelet_running_containers`
+- `kubelet_running_container_count`
+- `volume_manager_total_volumes`
+- `kubelet_node_config_error`
+- `kubelet_runtime_operations_total`
+- `kubelet_runtime_operations_errors_total`
+- `kubelet_runtime_operations_duration_seconds` `kubelet_runtime_operations_duration_seconds_bucket` `kubelet_runtime_operations_duration_seconds_sum` `kubelet_runtime_operations_duration_seconds_count`
+- `kubelet_pod_start_duration_seconds` `kubelet_pod_start_duration_seconds_bucket` `kubelet_pod_start_duration_seconds_sum` `kubelet_pod_start_duration_seconds_count`
+- `kubelet_pod_worker_duration_seconds` `kubelet_pod_worker_duration_seconds_bucket` `kubelet_pod_worker_duration_seconds_sum` `kubelet_pod_worker_duration_seconds_count`
+- `storage_operation_duration_seconds` `storage_operation_duration_seconds_bucket` `storage_operation_duration_seconds_sum` `storage_operation_duration_seconds_count`
+- `storage_operation_errors_total`
+- `kubelet_cgroup_manager_duration_seconds` `kubelet_cgroup_manager_duration_seconds_bucket` `kubelet_cgroup_manager_duration_seconds_sum` `kubelet_cgroup_manager_duration_seconds_count`
+- `kubelet_pleg_relist_duration_seconds` `kubelet_pleg_relist_duration_seconds_bucket` `kubelet_pleg_relist_duration_sum` `kubelet_pleg_relist_duration_seconds_count`
+- `kubelet_pleg_relist_interval_seconds` `kubelet_pleg_relist_interval_seconds_bucket` `kubelet_pleg_relist_interval_seconds_sum` `kubelet_pleg_relist_interval_seconds_count`
+- `rest_client_requests_total`
+- `rest_client_request_duration_seconds` `rest_client_request_duration_seconds_bucket` `rest_client_request_duration_seconds_sum` `rest_client_request_duration_seconds_count`
+- `process_resident_memory_bytes`
+- `process_cpu_seconds_total`
+- `go_goroutines`
+- `kubelet_volume_stats_capacity_bytes`
+- `kubelet_volume_stats_available_bytes`
+- `kubelet_volume_stats_inodes_used`
+- `kubelet_volume_stats_inodes`
+- `kubernetes_build_info`
-### Minimal ingestion for ON targets
-The following metrics are allow-listed with `minimalingestionprofile=true` for default ON targets. These metrics will be collected by default as these targets are scraped by default.
+**cadvisor**<br>
+- `container_spec_cpu_period`
+- `container_spec_cpu_quota`
+- `container_cpu_usage_seconds_total`
+- `container_memory_rss`
+- `container_network_receive_bytes_total`
+- `container_network_transmit_bytes_total`
+- `container_network_receive_packets_total`
+- `container_network_transmit_packets_total`
+- `container_network_receive_packets_dropped_total`
+- `container_network_transmit_packets_dropped_total`
+- `container_fs_reads_total`
+- `container_fs_writes_total`
+- `container_fs_reads_bytes_total`
+- `container_fs_writes_bytes_total`
+- `container_memory_working_set_bytes`
+- `container_memory_cache`
+- `container_memory_swap`
+- `container_cpu_cfs_throttled_periods_total`
+- `container_cpu_cfs_periods_total`
+- `container_memory_usage_bytes`
+- `kubernetes_build_info`
-`default-targets-metrics-keep-list.kubelet` = `"kubelet_volume_stats_used_bytes|kubelet_node_name|kubelet_running_pods|kubelet_running_pod_count|kubelet_running_containers|kubelet_running_container_count|volume_manager_total_volumes|kubelet_node_config_error|kubelet_runtime_operations_total|kubelet_runtime_operations_errors_total|kubelet_runtime_operations_duration_seconds|kubelet_runtime_operations_duration_seconds_bucket|kubelet_runtime_operations_duration_seconds_sum|kubelet_runtime_operations_duration_seconds_count|kubelet_pod_start_duration_seconds|kubelet_pod_start_duration_seconds_bucket|kubelet_pod_start_duration_seconds_sum|kubelet_pod_start_duration_seconds_count|kubelet_pod_worker_duration_seconds|kubelet_pod_worker_duration_seconds_bucket|kubelet_pod_worker_duration_seconds_sum|kubelet_pod_worker_duration_seconds_count|storage_operation_duration_seconds|storage_operation_duration_seconds_bucket|storage_operation_duration_seconds_sum|storage_operation_duration_seconds_count|storage_operation_errors_total|kubelet_cgroup_manager_duration_seconds|kubelet_cgroup_manager_duration_seconds_bucket|kubelet_cgroup_manager_duration_seconds_sum|kubelet_cgroup_manager_duration_seconds_count|kubelet_pleg_relist_duration_seconds|kubelet_pleg_relist_duration_seconds_bucket|kubelet_pleg_relist_duration_sum|kubelet_pleg_relist_duration_seconds_count|kubelet_pleg_relist_interval_seconds|kubelet_pleg_relist_interval_seconds_bucket|kubelet_pleg_relist_interval_seconds_sum|kubelet_pleg_relist_interval_seconds_count|rest_client_requests_total|rest_client_request_duration_seconds|rest_client_request_duration_seconds_bucket|rest_client_request_duration_seconds_sum|rest_client_request_duration_seconds_count|process_resident_memory_bytes|process_cpu_seconds_total|go_goroutines|kubelet_volume_stats_capacity_bytes|kubelet_volume_stats_available_bytes|kubelet_volume_stats_inodes_used|kubelet_volume_stats_inodes|kubernetes_build_info"`
+**kube-state-metrics**<br>
+- `kube_node_status_capacity`
+- `kube_job_status_succeeded`
+- `kube_job_spec_completions`
+- `kube_daemonset_status_desired_number_scheduled`
+- `kube_daemonset_status_number_ready`
+- `kube_deployment_spec_replicas`
+- `kube_deployment_status_replicas_ready`
+- `kube_pod_container_status_last_terminated_reason`
+- `kube_node_status_condition`
+- `kube_pod_container_status_restarts_total`
+- `kube_pod_container_resource_requests`
+- `kube_pod_status_phase`
+- `kube_pod_container_resource_limits`
+- `kube_node_status_allocatable`
+- `kube_pod_info`
+- `kube_pod_owner`
+- `kube_resourcequota`
+- `kube_statefulset_replicas`
+- `kube_statefulset_status_replicas`
+- `kube_statefulset_status_replicas_ready`
+- `kube_statefulset_status_replicas_current`
+- `kube_statefulset_status_replicas_updated`
+- `kube_namespace_status_phase`
+- `kube_node_info`
+- `kube_statefulset_metadata_generation`
+- `kube_pod_labels`
+- `kube_pod_annotations`
+- `kube_horizontalpodautoscaler_status_current_replicas`
+- `kube_horizontalpodautoscaler_status_desired_replicas`
+- `kube_horizontalpodautoscaler_spec_min_replicas`
+- `kube_horizontalpodautoscaler_spec_max_replicas`
+- `kube_node_status_condition`
+- `kube_node_spec_taint`
+- `kube_pod_container_status_waiting_reason`
+- `kube_job_failed`
+- `kube_job_status_start_time`
+- `kube_deployment_spec_replicas`
+- `kube_deployment_status_replicas_available`
+- `kube_deployment_status_replicas_updated`
+- `kube_job_status_active`
+- `kubernetes_build_info`
+- `kube_pod_container_info`
+- `kube_replicaset_owner`
-`default-targets-metrics-keep-list.cadvisor` = `"container_spec_cpu_period|container_spec_cpu_quota|container_cpu_usage_seconds_total|container_memory_rss|container_network_receive_bytes_total|container_network_transmit_bytes_total|container_network_receive_packets_total|container_network_transmit_packets_total|container_network_receive_packets_dropped_total|container_network_transmit_packets_dropped_total|container_fs_reads_total|container_fs_writes_total|container_fs_reads_bytes_total|container_fs_writes_bytes_total|container_memory_working_set_bytes|container_memory_cache|container_memory_swap|container_cpu_cfs_throttled_periods_total|container_cpu_cfs_periods_total|container_memory_usage_bytes|kubernetes_build_info"`
+**node-exporter (linux)**<br>
+- `node_cpu_seconds_total`
+- `node_memory_MemAvailable_bytes`
+- `node_memory_Buffers_bytes`
+- `node_memory_Cached_bytes`
+- `node_memory_MemFree_bytes`
+- `node_memory_Slab_bytes`
+- `node_memory_MemTotal_bytes`
+- `node_netstat_Tcp_RetransSegs`
+- `node_netstat_Tcp_OutSegs`
+- `node_netstat_TcpExt_TCPSynRetrans`
+- `node_load1`
+- `node_load5`
+- `node_load15`
+- `node_disk_read_bytes_total`
+- `node_disk_written_bytes_total`
+- `node_disk_io_time_seconds_total`
+- `node_filesystem_size_bytes`
+- `node_filesystem_avail_bytes`
+- `node_filesystem_readonly`
+- `node_network_receive_bytes_total`
+- `node_network_transmit_bytes_total`
+- `node_vmstat_pgmajfault`
+- `node_network_receive_drop_total`
+- `node_network_transmit_drop_total`
+- `node_disk_io_time_weighted_seconds_total`
+- `node_exporter_build_info`
+- `node_time_seconds`
+- `node_uname_info`
-`default-targets-metrics-keep-list.kubestate` = `"kube_node_status_capacity|kube_job_status_succeeded|kube_job_spec_completions|kube_daemonset_status_desired_number_scheduled|kube_daemonset_status_number_ready|kube_deployment_spec_replicas|kube_deployment_status_replicas_ready|kube_pod_container_status_last_terminated_reason|kube_node_status_condition|kube_pod_container_status_restarts_total|kube_pod_container_resource_requests|kube_pod_status_phase|kube_pod_container_resource_limits|kube_node_status_allocatable|kube_pod_info|kube_pod_owner|kube_resourcequota|kube_statefulset_replicas|kube_statefulset_status_replicas|kube_statefulset_status_replicas_ready|kube_statefulset_status_replicas_current|kube_statefulset_status_replicas_updated|kube_namespace_status_phase|kube_node_info|kube_statefulset_metadata_generation|kube_pod_labels|kube_pod_annotations|kube_horizontalpodautoscaler_status_current_replicas|kube_horizontalpodautoscaler_spec_max_replicas|kube_node_status_condition|kube_node_spec_taint|kube_pod_container_status_waiting_reason|kube_job_failed|kube_job_status_start_time|kube_deployment_spec_replicas|kube_deployment_status_replicas_available|kube_deployment_status_replicas_updated|kube_replicaset_owner|kubernetes_build_info"`
+### Minimal ingestion for default OFF targets
+The following metrics are allow-listed with `minimalingestionprofile=true` for default OFF targets. These metrics aren't collected by default because these targets aren't scraped by default. You can turn ON scraping for these targets by setting `default-scrape-settings-enabled.<target-name>=true` in the [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) under the `default-scrape-settings-enabled` section.
-`default-targets-metrics-keep-list.nodeexporter` = `"node_cpu_seconds_total|node_memory_MemAvailable_bytes|node_memory_Buffers_bytes|node_memory_Cached_bytes|node_memory_MemFree_bytes|node_memory_Slab_bytes|node_memory_MemTotal_bytes|node_netstat_Tcp_RetransSegs|node_netstat_Tcp_OutSegs|node_netstat_TcpExt_TCPSynRetrans|node_load1|node_load5|node_load15|node_disk_read_bytes_total|node_disk_written_bytes_total|node_disk_io_time_seconds_total|node_filesystem_size_bytes|node_filesystem_avail_bytes|node_network_receive_bytes_total|node_network_transmit_bytes_total|node_vmstat_pgmajfault|node_network_receive_drop_total|node_network_transmit_drop_total|node_disk_io_time_weighted_seconds_total|node_exporter_build_info|node_time_seconds|node_uname_info"`
+**core-dns**
+- `coredns_build_info`
+- `coredns_panics_total`
+- `coredns_dns_responses_total`
+- `coredns_forward_responses_total`
+- `coredns_dns_request_duration_seconds` `coredns_dns_request_duration_seconds_bucket` `coredns_dns_request_duration_seconds_sum` `coredns_dns_request_duration_seconds_count`
+- `coredns_forward_request_duration_seconds` `coredns_forward_request_duration_seconds_bucket` `coredns_forward_request_duration_seconds_sum` `coredns_forward_request_duration_seconds_count`
+- `coredns_dns_requests_total`
+- `coredns_forward_requests_total`
+- `coredns_cache_hits_total`
+- `coredns_cache_misses_total`
+- `coredns_cache_entries`
+- `coredns_plugin_enabled`
+- `coredns_dns_request_size_bytes` `coredns_dns_request_size_bytes_bucket` `coredns_dns_request_size_bytes_sum` `coredns_dns_request_size_bytes_count`
+- `coredns_dns_response_size_bytes` `coredns_dns_response_size_bytes_bucket` `coredns_dns_response_size_bytes_sum` `coredns_dns_response_size_bytes_count`
+- `process_resident_memory_bytes`
+- `process_cpu_seconds_total`
+- `go_goroutines`
+- `kubernetes_build_info`
-### Minimal ingestion for OFF targets
-The following metrics that are allow-listed with `minimalingestionprofile=true` for default OFF targets. These metrics won't be collected by default as these targets aren't scraped by default. You turn them on using `default-targets-metrics-keep-list.<target-name>=true`'.
+**kube-proxy**
+- `kubeproxy_sync_proxy_rules_duration_seconds` `kubeproxy_sync_proxy_rules_duration_seconds_bucket` `kubeproxy_sync_proxy_rules_duration_seconds_sum` `kubeproxy_sync_proxy_rules_duration_seconds_count`
+- `kubeproxy_network_programming_duration_seconds` `kubeproxy_network_programming_duration_seconds_bucket` `kubeproxy_network_programming_duration_seconds_sum` `kubeproxy_network_programming_duration_seconds_count`
+- `rest_client_requests_total`
+- `rest_client_request_duration_seconds` `rest_client_request_duration_seconds_bucket` `rest_client_request_duration_seconds_sum` `rest_client_request_duration_seconds_count`
+- `process_resident_memory_bytes`
+- `process_cpu_seconds_total`
+- `go_goroutines`
+- `kubernetes_build_info`
-`default-targets-metrics-keep-list.coredns` = `"coredns_build_info|coredns_panics_total|coredns_dns_responses_total|coredns_forward_responses_total|coredns_dns_request_duration_seconds|coredns_dns_request_duration_seconds_bucket|coredns_dns_request_duration_seconds_sum|coredns_dns_request_duration_seconds_count|coredns_forward_request_duration_seconds|coredns_forward_request_duration_seconds_bucket|coredns_forward_request_duration_seconds_sum|coredns_forward_request_duration_seconds_count|coredns_dns_requests_total|coredns_forward_requests_total|coredns_cache_hits_total|coredns_cache_misses_total|coredns_cache_entries|coredns_plugin_enabled|coredns_dns_request_size_bytes|coredns_dns_request_size_bytes_bucket|coredns_dns_request_size_bytes_sum|coredns_dns_request_size_bytes_count|coredns_dns_response_size_bytes|coredns_dns_response_size_bytes_bucket|coredns_dns_response_size_bytes_sum|coredns_dns_response_size_bytes_count|coredns_dns_response_size_bytes_bucket|coredns_dns_response_size_bytes_sum|coredns_dns_response_size_bytes_count|process_resident_memory_bytes|process_cpu_seconds_total|go_goroutines|kubernetes_build_info"`
+**api-server**
+- `apiserver_request_duration_seconds` `apiserver_request_duration_seconds_bucket` `apiserver_request_duration_seconds_sum` `apiserver_request_duration_seconds_count`
+- `apiserver_request_total`
+- `workqueue_adds_total`
+- `workqueue_depth`
+- `workqueue_queue_duration_seconds` `workqueue_queue_duration_seconds_bucket` `workqueue_queue_duration_seconds_sum` `workqueue_queue_duration_seconds_count`
+- `process_resident_memory_bytes`
+- `process_cpu_seconds_total`
+- `go_goroutines`
+- `kubernetes_build_info`
-`default-targets-metrics-keep-list.kubeproxy` = `"kubeproxy_sync_proxy_rules_duration_seconds|kubeproxy_sync_proxy_rules_duration_seconds_bucket|kubeproxy_sync_proxy_rules_duration_seconds_sum|kubeproxy_sync_proxy_rules_duration_seconds_count|kubeproxy_network_programming_duration_seconds|kubeproxy_network_programming_duration_seconds_bucket|kubeproxy_network_programming_duration_seconds_sum|kubeproxy_network_programming_duration_seconds_count|rest_client_requests_total|rest_client_request_duration_seconds|rest_client_request_duration_seconds_bucket|rest_client_request_duration_seconds_sum|rest_client_request_duration_seconds_count|process_resident_memory_bytes|process_cpu_seconds_total|go_goroutines|kubernetes_build_info"`
+**windows-exporter (job=windows-exporter)**<br>
+- `windows_system_system_up_time`
+- `windows_cpu_time_total`
+- `windows_memory_available_bytes`
+- `windows_os_visible_memory_bytes`
+- `windows_memory_cache_bytes`
+- `windows_memory_modified_page_list_bytes`
+- `windows_memory_standby_cache_core_bytes`
+- `windows_memory_standby_cache_normal_priority_bytes`
+- `windows_memory_standby_cache_reserve_bytes`
+- `windows_memory_swap_page_operations_total`
+- `windows_logical_disk_read_seconds_total`
+- `windows_logical_disk_write_seconds_total`
+- `windows_logical_disk_size_bytes`
+- `windows_logical_disk_free_bytes`
+- `windows_net_bytes_total`
+- `windows_net_packets_received_discarded_total`
+- `windows_net_packets_outbound_discarded_total`
+- `windows_container_available`
+- `windows_container_cpu_usage_seconds_total`
+- `windows_container_memory_usage_commit_bytes`
+- `windows_container_memory_usage_private_working_set_bytes`
+- `windows_container_network_receive_bytes_total`
+- `windows_container_network_transmit_bytes_total`
-`default-targets-metrics-keep-list.apiserver` = `"apiserver_request_duration_seconds|apiserver_request_duration_seconds_bucket|apiserver_request_duration_seconds_sum|apiserver_request_duration_seconds_count|apiserver_request_total|workqueue_adds_total|workqueue_depth|workqueue_queue_duration_seconds|workqueue_queue_duration_seconds_bucket|workqueue_queue_duration_seconds_sum|workqueue_queue_duration_seconds_count|process_resident_memory_bytes|process_cpu_seconds_total|go_goroutines|kubernetes_build_info"`
+**kube-proxy-windows (job=kube-proxy-windows)**<br>
+- `kubeproxy_sync_proxy_rules_duration_seconds`
+- `kubeproxy_sync_proxy_rules_duration_seconds_bucket`
+- `kubeproxy_sync_proxy_rules_duration_seconds_sum`
+- `kubeproxy_sync_proxy_rules_duration_seconds_count`
+- `rest_client_requests_total`
+- `rest_client_request_duration_seconds`
+- `rest_client_request_duration_seconds_bucket`
+- `rest_client_request_duration_seconds_sum`
+- `rest_client_request_duration_seconds_count`
+- `process_resident_memory_bytes`
+- `process_cpu_seconds_total`
+- `go_goroutines`
## Next steps
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-configuration.md
Last updated 09/28/2022
-# Customize scraping of Prometheus metrics in Azure Monitor (preview)
+# Customize scraping of Prometheus metrics in Azure Monitor managed service for Prometheus (preview)
-This article provides instructions on customizing metrics scraping for a Kubernetes cluster with the [metrics add-on](prometheus-metrics-enable.md) in Azure Monitor.
+This article provides instructions on customizing metrics scraping for a Kubernetes cluster with the [metrics addon](prometheus-metrics-enable.md) in Azure Monitor.
## Configmaps
-Three different configmaps can be configured to change the default settings of the metrics add-on:
--- `ama-metrics-settings-configmap`-- `ama-metrics-prometheus-config`-- `ama-metrics-prometheus-config-node`
+Four different configmaps can be configured to provide scrape configuration and other settings for the metrics add-on. All configmaps should be applied to the `kube-system` namespace of the cluster.
+
+1. [`ama-metrics-settings-configmap`](https://aka.ms/azureprometheus-addon-settings-configmap)
+ This configmap has the following simple settings that can be configured. Take the configmap from the GitHub repo linked above, change the settings as required, and apply/deploy the configmap to the `kube-system` namespace for your cluster.
+ * cluster alias (to change the value of the `cluster` label in every time series/metric that's ingested from a cluster)
+ * enable/disable default scrape targets - turn ON/OFF default scraping based on targets. Scrape configuration for these default targets is already pre-defined/built-in
+ * enable pod annotation-based scraping per namespace
+ * metric keep-lists - this setting controls which metrics are allowed from each default target and changes the default behavior
+ * scrape intervals for default/pre-defined targets. `30 secs` is the default scrape frequency, and it can be changed per default target using this configmap
+ * debug-mode - turning this ON helps debug missing metric/ingestion issues - see more on [troubleshooting](prometheus-metrics-troubleshoot.md#debug-mode)
+2. [`ama-metrics-prometheus-config`](https://aka.ms/azureprometheus-addon-rs-configmap)
+ This configmap can be used to provide Prometheus scrape config for the add-on replica. The add-on runs a singleton replica, and any cluster-level services can be discovered and scraped by providing scrape jobs in this configmap. Take the sample configmap from the GitHub repo linked above, add the scrape jobs you need, and apply/deploy the configmap to the `kube-system` namespace for your cluster (see the sketch after this list).
+3. [`ama-metrics-prometheus-config-node`](https://aka.ms/azureprometheus-addon-ds-configmap)
+ This configmap can be used to provide Prometheus scrape config for the add-on DaemonSet that runs on every **Linux** node in the cluster; any node-level targets on each node can be scraped by providing scrape jobs in this configmap. When you use this configmap, you can use the `$NODE_IP` variable in your scrape config, which gets substituted by the corresponding node's IP address in the DaemonSet pod running on each node. This way you get access to scrape anything that runs on that node from the metrics add-on DaemonSet. **Be careful when you use discoveries in the scrape config in this node-level configmap, because every node in the cluster will set up and discover the target(s) and will collect redundant metrics.**
+ Take the sample configmap from the GitHub repo linked above, add the scrape jobs you need, and apply/deploy the configmap to the `kube-system` namespace for your cluster.
+4. [`ama-metrics-prometheus-config-node-windows`](https://aka.ms/azureprometheus-addon-ds-configmap-windows)
+ This configmap can be used to provide Prometheus scrape config for the add-on DaemonSet that runs on every **Windows** node in the cluster; node-level targets on each node can be scraped by providing scrape jobs in this configmap. When you use this configmap, you can use the `$NODE_IP` variable in your scrape config, which is substituted by the corresponding node's IP address in the DaemonSet pod running on each node. This way you get access to scrape anything that runs on that node from the metrics add-on DaemonSet. **Be careful when you use discoveries in the scrape config in this node-level configmap, because every node in the cluster will set up and discover the target(s) and will collect redundant metrics.**
+ Take the sample configmap from the GitHub repo linked above, add the scrape jobs you need, and apply/deploy the configmap to the `kube-system` namespace for your cluster.
## Metrics add-on settings configmap
-The [ama-metrics-settings-configmap](https://github.com/Azure/prometheus-collector/blob/main/otelcollector/configmaps/ama-metrics-settings-configmap.yaml) can be downloaded, edited, and applied to the cluster to customize the out-of-the-box features of the metrics add-on.
+The [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) can be downloaded, edited, and applied to the cluster to customize the out-of-the-box features of the metrics add-on.
### Enable and disable default targets
-The following table has a list of all the default targets that the Azure Monitor metrics add-on can scrape by default and whether it's initially enabled. Default targets are scraped every 30 seconds.
-
-| Key | Type | Enabled | Description |
-|--||-|-|
-| kubelet | bool | `true` | Scrape kubelet in every node in the K8s cluster without any extra scrape config. |
-| cadvisor | bool | `true` | Scrape cadvisor in every node in the K8s cluster without any extra scrape config.<br>Linux only. |
-| kubestate | bool | `true` | Scrape kube-state-metrics in the K8s cluster (installed as a part of the add-on) without any extra scrape config. |
-| nodeexporter | bool | `true` | Scrape node metrics without any extra scrape config.<br>Linux only. |
-| coredns | bool | `false` | Scrape coredns service in the K8s cluster without any extra scrape config. |
-| kubeproxy | bool | `false` | Scrape kube-proxy in every Linux node discovered in the K8s cluster without any extra scrape config.<br>Linux only. |
-| apiserver | bool | `false` | Scrape the Kubernetes API server in the K8s cluster without any extra scrape config. |
-| windowsexporter | bool | `false` | Scrape windows-exporter in every node in the K8s cluster without any extra scrape config.<br>Windows only. |
-| windowskubeproxy | bool | `false` | Scrape windows-kube-proxy in every node in the K8s cluster without any extra scrape config.<br>Windows only. |
-| prometheuscollectorhealth | bool | `false` | Scrape information about the prometheus-collector container, such as the amount and size of time series scraped. |
+The following table has a list of all the default targets that the Azure Monitor metrics add-on can scrape by default and whether it's initially enabled. Default targets are scraped every 30 seconds. A replica is deployed to scrape cluster-wide targets such as kube-state-metrics. A DaemonSet is also deployed to scrape node-wide targets such as kubelet.
+
+| Key | Type | Enabled | Pod | Description |
+|--||-|-|-|
+| kubelet | bool | `true` | Linux DaemonSet | Scrape kubelet in every node in the K8s cluster without any extra scrape config. |
+| cadvisor | bool | `true` | Linux DaemonSet | Scrape cadvisor in every node in the K8s cluster without any extra scrape config.<br>Linux only. |
+| kubestate | bool | `true` | Linux replica | Scrape kube-state-metrics in the K8s cluster (installed as a part of the add-on) without any extra scrape config. |
+| nodeexporter | bool | `true` | Linux DaemonSet | Scrape node metrics without any extra scrape config.<br>Linux only. |
+| coredns | bool | `false` | Linux replica | Scrape coredns service in the K8s cluster without any extra scrape config. |
+| kubeproxy | bool | `false` | Linux DaemonSet | Scrape kube-proxy in every Linux node discovered in the K8s cluster without any extra scrape config.<br>Linux only. |
+| apiserver | bool | `false` | Linux replica | Scrape the Kubernetes API server in the K8s cluster without any extra scrape config. |
+| windowsexporter | bool | `false` | Windows DaemonSet | Scrape windows-exporter in every node in the K8s cluster without any extra scrape config.<br>Windows only. |
+| windowskubeproxy | bool | `false` | Windows DaemonSet | Scrape windows-kube-proxy in every node in the K8s cluster without any extra scrape config.<br>Windows only. |
+| prometheuscollectorhealth | bool | `false` | Linux replica | Scrape information about the prometheus-collector container, such as the amount and size of time series scraped. |
If you want to turn on the scraping of the default targets that aren't enabled by default, edit the [configmap](https://aka.ms/azureprometheus-addon-settings-configmap) `ama-metrics-settings-configmap` to update the targets listed under `default-scrape-settings-enabled` to `true`. Apply the configmap to your cluster.

### Customize metrics collected by default targets
-By default, for all the default targets, only minimal metrics used in the default recording rules, alerts, and Grafana dashboards are ingested as described in [minimal-ingestion-profile](prometheus-metrics-scrape-configuration-minimal.md). To collect all metrics from default targets, in the configmap under `default-targets-metrics-keep-list`, set `minimalingestionprofile` to `false`.
+By default, for all the default targets, only minimal metrics used in the default recording rules, alerts, and Grafana dashboards are ingested, as described in [minimal-ingestion-profile](prometheus-metrics-scrape-configuration-minimal.md). To collect all metrics from default targets, update the keep-lists in the settings configmap under `default-targets-metrics-keep-list`, and set `minimalingestionprofile` to `false`.
-To filter in more metrics for any default targets, edit the settings under `default-targets-metrics-keep-list` for the corresponding job you want to change.
+To allowlist more metrics in addition to the default metrics for any default target, edit the settings under `default-targets-metrics-keep-list` for the corresponding job you want to change.
For example, `kubelet` is the metric filtering setting for the default target kubelet. Use the following script to filter *in* metrics collected for the default targets by using regex-based filtering.
apiserver = "mymetric.*"
To further customize the default jobs to change properties like collection frequency or labels, disable the corresponding default target by setting the configmap value for the target to `false`. Then apply the job by using a custom configmap. For details on custom configuration, see [Customize scraping of Prometheus metrics in Azure Monitor](prometheus-metrics-scrape-configuration.md#configure-custom-prometheus-scrape-jobs). ### Cluster alias
-The cluster label appended to every time series scraped uses the last part of the full AKS cluster's Azure Resource Manager resource ID. For example, if the resource ID is `/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/rg-name/providers/Microsoft.ContainerService/managedClusters/clustername`, the cluster label is `clustername`.
+The cluster label appended to every time series scraped uses the last part of the full AKS cluster's Azure Resource Manager resource ID. For example, if the resource ID is `/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/rg-name/providers/Microsoft.ContainerService/managedClusters/myclustername`, the cluster label is `myclustername`.
-To override the cluster label in the time series scraped, update the setting `cluster_alias` to any string under `prometheus-collector-settings` in the [configmap](https://aka.ms/azureprometheus-addon-settings-configmap) `ama-metrics-settings-configmap`. You can either create this configmap or edit an existing one.
+To override the cluster label in the time series scraped, update the setting `cluster_alias` to any string under `prometheus-collector-settings` in the [configmap](https://aka.ms/azureprometheus-addon-settings-configmap) `ama-metrics-settings-configmap`. You can create this configmap if it doesn't exist in the cluster, or edit the existing one if it already exists.
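As a minimal sketch (assuming the `key = "value"` layout used by the other sections of this settings configmap), the relevant section would look like the following, where `myinternalclustername` is a placeholder alias:

```yaml
# Sketch of the prometheus-collector-settings section in ama-metrics-settings-configmap.
# cluster_alias overrides the value of the cluster label on every scraped time series.
prometheus-collector-settings: |-
  cluster_alias = "myinternalclustername"
```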
The new label also shows up in the cluster parameter dropdown in the Grafana dashboards instead of the default one.
The new label also shows up in the cluster parameter dropdown in the Grafana das
To view every metric that's being scraped for debugging purposes, the metrics add-on agent can be configured to run in debug mode by updating the setting `enabled` to `true` under the `debug-mode` setting in the [configmap](https://aka.ms/azureprometheus-addon-settings-configmap) `ama-metrics-settings-configmap`. You can either create this configmap or edit an existing one. For more information, see the [Debug mode section in Troubleshoot collection of Prometheus metrics](prometheus-metrics-troubleshoot.md#debug-mode).

### Scrape interval settings
-To update the scrape interval settings for any target, you can update the duration in the setting `default-targets-scrape-interval-settings` for that target in the [configmap](https://aka.ms/azureprometheus-addon-settings-configmap) `ama-metrics-settings-configmap`. You have to set the scrape intervals in the correct format specified in [this website](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file). Otherwise, the default value of 30 seconds is applied to the corresponding targets.
+To update the scrape interval settings for any target, you can update the duration in the setting `default-targets-scrape-interval-settings` for that target in the [configmap](https://aka.ms/azureprometheus-addon-settings-configmap) `ama-metrics-settings-configmap`. You have to set the scrape intervals in the correct format specified in [this website](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file). Otherwise, the default value of 30 seconds is applied to the corresponding targets. For example, if you want to update the scrape interval for the `kubelet` job to `60s`, update the following section in the YAML:
+
+```
+default-targets-scrape-interval-settings: |-
+ kubelet = "60s"
+ coredns = "30s"
+ cadvisor = "30s"
+ kubeproxy = "30s"
+ apiserver = "30s"
+ kubestate = "30s"
+ nodeexporter = "30s"
+ windowsexporter = "30s"
+ windowskubeproxy = "30s"
+ kappiebasic = "30s"
+ prometheuscollectorhealth = "30s"
+ podannotations = "30s"
+```
+and apply the YAML using the following command: `kubectl apply -f .\ama-metrics-settings-configmap.yaml`
## Configure custom Prometheus scrape jobs
Follow the instructions to [create, validate, and apply the configmap](prometheu
### Advanced setup: Configure custom Prometheus scrape jobs for the DaemonSet
-The `ama-metrics` ReplicaSet pod consumes the custom Prometheus config and scrapes the specified targets. For a cluster with a large number of nodes and pods and a large volume of metrics to scrape, some of the applicable custom scrape targets can be off-loaded from the single `ama-metrics` ReplicaSet pod to the `ama-metrics` DaemonSet pod.
+The `ama-metrics` Replica pod consumes the custom Prometheus config and scrapes the specified targets. For a cluster with a large number of nodes and pods and a large volume of metrics to scrape, some of the applicable custom scrape targets can be off-loaded from the single `ama-metrics` Replica pod to the `ama-metrics` DaemonSet pod.
-The [ama-metrics-prometheus-config-node configmap](https://aka.ms/azureprometheus-addon-ds-configmap), similar to the regular configmap, can be created to have static scrape configs on each node. The scrape config should only target a single node and shouldn't use service discovery. Otherwise, each node tries to scrape all targets and makes many calls to the Kubernetes API server.
+The [ama-metrics-prometheus-config-node configmap](https://aka.ms/azureprometheus-addon-ds-configmap) is similar to the replica-set configmap and can be created to have static scrape configs on each node. The scrape config should only target a single node and shouldn't use service discovery. Otherwise, each node tries to scrape all targets and makes many calls to the Kubernetes API server.
-The following `node-exporter` config is one of the default targets for the DaemonSet pods. It uses the `$NODE_IP` environment variable, which is already set for every `ama-metrics` add-on container to target a specific port on the node.
+Example: The following `node-exporter` config is one of the default targets for the DaemonSet pods. It uses the `$NODE_IP` environment variable, which is already set for every `ama-metrics` add-on container, to target a specific port on the node.
```yaml
- - job_name: node
+ - job_name: nodesample
    scrape_interval: 30s
    scheme: http
    metrics_path: /metrics
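    # Hedged sketch of how such a job typically finishes (not taken from the source document):
    # $NODE_IP is provided by the add-on container, and 9100 is the conventional node-exporter port.
    static_configs:
      - targets: ['$NODE_IP:9100']
```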
scrape_configs:
Any other unsupported sections must be removed from the config before they're applied as a configmap. Otherwise, the custom configuration fails validation and isn't applied.
-See the [Apply config file](prometheus-metrics-scrape-validate.md#apply-config-file) section to create a configmap from the Prometheus config.
+See the [Apply config file](prometheus-metrics-scrape-validate.md#deploy-config-file-as-configmap) section to create a configmap from the Prometheus config.
> [!NOTE]
> When custom scrape configuration fails to apply because of validation errors, default scrape configuration continues to be used.
metric_relabel_configs:
### Pod annotation-based scraping
-If you're currently using Azure Monitor Container insights Prometheus scraping with the setting `monitor_kubernetes_pods = true`, adding this job to your custom config allows you to scrape the same pods and metrics.
- The following scrape config uses the `__meta_*` labels added from the `kubernetes_sd_configs` for the `pod` role to filter for pods with certain annotations. To scrape certain pods, specify the port, path, and scheme through annotations for the pod and the following job scrapes only the address specified by the annotation:
To scrape certain pods, specify the port, path, and scheme through annotations f
```yaml
scrape_configs:
- - job_name: 'kubernetes-pods'
+ - job_name: 'kubernetespods-sample'
kubernetes_sd_configs: - role: pod
scrape_configs:
      regex: __meta_kubernetes_pod_label_(.+)
```
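For context, a pod that you want an annotation-based job like this to pick up would typically carry annotations along the following lines. The `prometheus.io/*` keys shown are the common community convention and are an assumption here; use whichever annotations your relabel rules actually match on.

```yaml
# Hypothetical pod metadata showing the conventional prometheus.io annotations
# (scrape toggle, port, path, and scheme) that annotation-based jobs usually key off.
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
    prometheus.io/path: "/metrics"
    prometheus.io/scheme: "http"
```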
-See the [Apply config file](prometheus-metrics-scrape-validate.md#apply-config-file) section to create a configmap from the Prometheus config.
+See the [Apply config file](prometheus-metrics-scrape-validate.md#deploy-config-file-as-configmap) section to create a configmap from the Prometheus config.
## Next steps
azure-monitor Prometheus Metrics Scrape Default https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-default.md
This article lists the default targets, dashboards, and recording rules when you
The default scrape frequency for all default targets and scrapes is 30 seconds.
-## Targets scraped
+## Targets scraped by default
- `cadvisor` (`job=cadvisor`)
- `nodeexporter` (`job=node`)
This article lists the default targets, dashboards, and recording rules when you
The following metrics are collected by default from each default target. All other metrics are dropped through relabeling rules.

**cadvisor (job=cadvisor)**<br>
+ - `container_spec_cpu_period`
+ - `container_spec_cpu_quota`
+ - `container_cpu_usage_seconds_total`
 - `container_memory_rss`
 - `container_network_receive_bytes_total`
 - `container_network_transmit_bytes_total`
The following metrics are collected by default from each default target. All oth
 - `container_fs_reads_total`
 - `container_fs_writes_total`
 - `container_fs_reads_bytes_total`
- - `container_fs_writes_bytes_total`
- - `container_cpu_usage_seconds_total`
+ - `container_fs_writes_bytes_total`
+ - `container_memory_working_set_bytes`
+ - `container_memory_cache`
+ - `container_memory_swap`
+ - `container_cpu_cfs_throttled_periods_total`
+ - `container_cpu_cfs_periods_total`
+ - `container_memory_usage_bytes`
+ - `kubernetes_build_info`
**kubelet (job=kubelet)**<br>
+ - `kubelet_volume_stats_used_bytes`
 - `kubelet_node_name`
 - `kubelet_running_pods`
 - `kubelet_running_pod_count`
- - `kubelet_running_sum_containers`
+ - `kubelet_running_containers`
 - `kubelet_running_container_count`
 - `volume_manager_total_volumes`
 - `kubelet_node_config_error`
 - `kubelet_runtime_operations_total`
 - `kubelet_runtime_operations_errors_total`
- - `kubelet_runtime_operations_duration_seconds_bucket`
- - `kubelet_runtime_operations_duration_seconds_sum`
- - `kubelet_runtime_operations_duration_seconds_count`
- - `kubelet_pod_start_duration_seconds_bucket`
- - `kubelet_pod_start_duration_seconds_sum`
- - `kubelet_pod_start_duration_seconds_count`
- - `kubelet_pod_worker_duration_seconds_bucket`
- - `kubelet_pod_worker_duration_seconds_sum`
- - `kubelet_pod_worker_duration_seconds_count`
- - `storage_operation_duration_seconds_bucket`
- - `storage_operation_duration_seconds_sum`
- - `storage_operation_duration_seconds_count`
+ - `kubelet_runtime_operations_duration_seconds` `kubelet_runtime_operations_duration_seconds_bucket` `kubelet_runtime_operations_duration_seconds_sum` `kubelet_runtime_operations_duration_seconds_count`
+ - `kubelet_pod_start_duration_seconds` `kubelet_pod_start_duration_seconds_bucket` `kubelet_pod_start_duration_seconds_sum` `kubelet_pod_start_duration_seconds_count`
+ - `kubelet_pod_worker_duration_seconds` `kubelet_pod_worker_duration_seconds_bucket` `kubelet_pod_worker_duration_seconds_sum` `kubelet_pod_worker_duration_seconds_count`
+ - `storage_operation_duration_seconds` `storage_operation_duration_seconds_bucket` `storage_operation_duration_seconds_sum` `storage_operation_duration_seconds_count`
- `storage_operation_errors_total`
- - `kubelet_cgroup_manager_duration_seconds_bucket`
- - `kubelet_cgroup_manager_duration_seconds_sum`
- - `kubelet_cgroup_manager_duration_seconds_count`
- - `kubelet_pleg_relist_interval_seconds_bucket`
- - `kubelet_pleg_relist_interval_seconds_count`
- - `kubelet_pleg_relist_interval_seconds_sum`
- - `kubelet_pleg_relist_duration_seconds_bucket`
- - `kubelet_pleg_relist_duration_seconds_count`
- - `kubelet_pleg_relist_duration_seconds_sum`
+ - `kubelet_cgroup_manager_duration_seconds` `kubelet_cgroup_manager_duration_seconds_bucket` `kubelet_cgroup_manager_duration_seconds_sum` `kubelet_cgroup_manager_duration_seconds_count`
+ - `kubelet_pleg_relist_duration_seconds` `kubelet_pleg_relist_duration_seconds_bucket` `kubelet_pleg_relist_duration_seconds_sum` `kubelet_pleg_relist_duration_seconds_count`
+ - `kubelet_pleg_relist_interval_seconds` `kubelet_pleg_relist_interval_seconds_bucket` `kubelet_pleg_relist_interval_seconds_sum` `kubelet_pleg_relist_interval_seconds_count`
- `rest_client_requests_total`
- - `rest_client_request_duration_seconds_bucket`
- - `rest_client_request_duration_seconds_sum`
- - `rest_client_request_duration_seconds_count`
+ - `rest_client_request_duration_seconds` `rest_client_request_duration_seconds_bucket` `rest_client_request_duration_seconds_sum` `rest_client_request_duration_seconds_count`
 - `process_resident_memory_bytes`
 - `process_cpu_seconds_total`
 - `go_goroutines`
- - `kubernetes_build_info`
+ - `kubelet_volume_stats_capacity_bytes`
+ - `kubelet_volume_stats_available_bytes`
+ - `kubelet_volume_stats_inodes_used`
+ - `kubelet_volume_stats_inodes`
+ - `kubernetes_build_info`
**nodeexporter (job=node)**<br>
- - `node_memory_MemTotal_bytes`
 - `node_cpu_seconds_total`
 - `node_memory_MemAvailable_bytes`
 - `node_memory_Buffers_bytes`
 - `node_memory_Cached_bytes`
 - `node_memory_MemFree_bytes`
 - `node_memory_Slab_bytes`
- - `node_filesystem_avail_bytes`
+ - `node_memory_MemTotal_bytes`
+ - `node_netstat_Tcp_RetransSegs`
+ - `node_netstat_Tcp_OutSegs`
+ - `node_netstat_TcpExt_TCPSynRetrans`
+ - `node_load1`
+ - `node_load5`
+ - `node_load15`
+ - `node_disk_read_bytes_total`
+ - `node_disk_written_bytes_total`
+ - `node_disk_io_time_seconds_total`
- `node_filesystem_size_bytes`
- - `node_time_seconds`
- - `node_exporter_build_info`
- - `node_load1`
- - `node_vmstat_pgmajfault`
+ - `node_filesystem_avail_bytes`
+ - `node_filesystem_readonly`
 - `node_network_receive_bytes_total`
 - `node_network_transmit_bytes_total`
+ - `node_vmstat_pgmajfault`
 - `node_network_receive_drop_total`
 - `node_network_transmit_drop_total`
- - `node_disk_io_time_seconds_total`
- `node_disk_io_time_weighted_seconds_total`
- - `node_load5`
- - `node_load15`
- - `node_disk_read_bytes_total`
- - `node_disk_written_bytes_total`
- - `node_uname_info`
+ - `node_exporter_build_info`
+ - `node_time_seconds`
+ - `node_uname_info`
**kube-state-metrics (job=kube-state-metrics)**<br>
- - `kube_node_status_allocatable`
- - `kube_pod_owner`
+ - `kube_node_status_capacity`
+ - `kube_job_status_succeeded`
+ - `kube_job_spec_completions`
+ - `kube_daemonset_status_desired_number_scheduled`
+ - `kube_daemonset_status_number_ready`
+ - `kube_deployment_spec_replicas`
+ - `kube_deployment_status_replicas_ready`
+ - `kube_pod_container_status_last_terminated_reason`
+ - `kube_node_status_condition`
+ - `kube_pod_container_status_restarts_total`
 - `kube_pod_container_resource_requests`
 - `kube_pod_status_phase`
 - `kube_pod_container_resource_limits`
- - `kube_pod_info|kube_replicaset_owner`
- - `kube_resourcequota`
- - `kube_namespace_status_phase`
- - `kube_node_status_capacity`
- - `kube_node_info`
+ - `kube_node_status_allocatable`
- `kube_pod_info`
- - `kube_deployment_spec_replicas`
- - `kube_deployment_status_replicas_available`
- - `kube_deployment_status_replicas_updated`
- - `kube_statefulset_status_replicas_ready`
+ - `kube_pod_owner`
+ - `kube_resourcequota`
+ - `kube_statefulset_replicas`
- `kube_statefulset_status_replicas`
+ - `kube_statefulset_status_replicas_ready`
+ - `kube_statefulset_status_replicas_current`
- `kube_statefulset_status_replicas_updated`
- - `kube_job_status_start_time`
- - `kube_job_status_active`
- - `kube_job_failed`
- - `kube_horizontalpodautoscaler_status_desired_replicas`
+ - `kube_namespace_status_phase`
+ - `kube_node_info`
+ - `kube_statefulset_metadata_generation`
+ - `kube_pod_labels`
+ - `kube_pod_annotations`
- `kube_horizontalpodautoscaler_status_current_replicas`
+ - `kube_horizontalpodautoscaler_status_desired_replicas`
 - `kube_horizontalpodautoscaler_spec_min_replicas`
 - `kube_horizontalpodautoscaler_spec_max_replicas`
- - `kubernetes_build_info`
 - `kube_node_status_condition`
 - `kube_node_spec_taint`
+ - `kube_pod_container_status_waiting_reason`
+ - `kube_job_failed`
+ - `kube_job_status_start_time`
+ - `kube_deployment_spec_replicas`
+ - `kube_deployment_status_replicas_available`
+ - `kube_deployment_status_replicas_updated`
+ - `kube_job_status_active`
+ - `kubernetes_build_info`
+ - `kube_pod_container_info`
+ - `kube_replicaset_owner`
-## Targets scraped for Windows
+## Default targets scraped for Windows
+The following Windows targets are configured but not enabled (**disabled/OFF**) by default, so you don't have to provide any scrape job configuration for them. To scrape these targets, turn them on (enable them) in the `default-scrape-settings-enabled` section of the [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap), as shown in the sketch after the note below.
Two default jobs can be run for Windows that scrape metrics required for the dashboards specific to Windows.-
-> [!NOTE]
-> This requires an update in the ama-metrics-settings-configmap and installing windows-exporter on all Windows node pools. For more information, see the [enablement document](./prometheus-metrics-enable.md#enable-prometheus-metric-collection).
- `windows-exporter` (`job=windows-exporter`)
- `kube-proxy-windows` (`job=kube-proxy-windows`)
+> [!NOTE]
+> This requires applying or updating the `ama-metrics-settings-configmap` configmap and installing `windows-exporter` on all Windows nodes. For more information, see the [enablement document](./prometheus-metrics-enable.md#enable-prometheus-metric-collection).
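For reference, a minimal sketch of the relevant part of that settings configmap is shown below. The key names `windowsexporter` and `windowskubeproxy` are assumptions here; confirm them against the linked `ama-metrics-settings-configmap` before applying.

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: ama-metrics-settings-configmap
  namespace: kube-system
data:
  default-scrape-settings-enabled: |-
    # Assumed setting names; verify against the published settings configmap.
    windowsexporter = true
    windowskubeproxy = true
```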
+ ## Metrics scraped for Windows
The following metrics are collected when windows-exporter and kube-proxy-windows are enabled.
The following metrics are collected when windows-exporter and kube-proxy-windows
## Dashboards
-The following default dashboards are automatically provisioned and configured by Azure Monitor managed service for Prometheus when you [link your Azure Monitor workspace to an Azure Managed Grafana instance](../essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Source code for these dashboards can be found in [this GitHub folder](https://aka.ms/azureprometheus-mixins).
+The following default dashboards are automatically provisioned and configured by Azure Monitor managed service for Prometheus when you [link your Azure Monitor workspace to an Azure Managed Grafana instance](../essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Source code for these dashboards can be found in [this GitHub repository](https://aka.ms/azureprometheus-mixins). The dashboards are provisioned in the specified Azure Managed Grafana instance under the `Managed Prometheus` folder in Grafana. These are the standard open source community dashboards for monitoring Kubernetes clusters with Prometheus and Grafana.
-- Kubernetes / Compute Resources / Cluster-- Kubernetes / Compute Resources / Namespace (Pods)-- Kubernetes / Compute Resources / Node (Pods)-- Kubernetes / Compute Resources / Pod-- Kubernetes / Compute Resources / Namespace (Workloads)-- Kubernetes / Compute Resources / Workload-- Kubernetes / Kubelet-- Node Exporter / USE Method / Node-- Node Exporter / Nodes-- Kubernetes / Compute Resources / Cluster (Windows)-- Kubernetes / Compute Resources / Namespace (Windows)-- Kubernetes / Compute Resources / Pod (Windows)-- Kubernetes / USE Method / Cluster (Windows)-- Kubernetes / USE Method / Node (Windows)
+- `Kubernetes / Compute Resources / Cluster`
+- `Kubernetes / Compute Resources / Namespace (Pods)`
+- `Kubernetes / Compute Resources / Node (Pods)`
+- `Kubernetes / Compute Resources / Pod`
+- `Kubernetes / Compute Resources / Namespace (Workloads)`
+- `Kubernetes / Compute Resources / Workload`
+- `Kubernetes / Kubelet`
+- `Node Exporter / USE Method / Node`
+- `Node Exporter / Nodes`
+- `Kubernetes / Compute Resources / Cluster (Windows)`
+- `Kubernetes / Compute Resources / Namespace (Windows)`
+- `Kubernetes / Compute Resources / Pod (Windows)`
+- `Kubernetes / USE Method / Cluster (Windows)`
+- `Kubernetes / USE Method / Node (Windows)`
## Recording rules
-The following default recording rules are automatically configured by Azure Monitor managed service for Prometheus when you [link your Azure Monitor workspace to an Azure Managed Grafana instance](../essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Source code for these recording rules can be found in [this GitHub folder](https://aka.ms/azureprometheus-mixins).
+The following default recording rules are automatically configured by Azure Monitor managed service for Prometheus when you [link your Azure Monitor workspace to an Azure Managed Grafana instance](../essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Source code for these recording rules can be found in [this GitHub repository](https://aka.ms/azureprometheus-mixins). These are the standard open source recording rules used in the dashboards above.
- `cluster:node_cpu:ratio_rate5m`
- `namespace_cpu:kube_pod_container_resource_requests:sum`
azure-monitor Prometheus Metrics Scrape Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-scale.md
This article provides guidance on performance that can be expected when collecti
## CPU and memory
-The CPU and memory usage is correlated with the number of bytes of each sample and the number of samples scraped. The benchmarks below are based on the [default targets scraped](prometheus-metrics-scrape-default.md), volume of custom metrics scraped, and number of nodes, pods, and containers. These numbers are meant as a reference since usage can still vary significantly depending on the number of timeseries and bytes per metric.
+The CPU and memory usage is correlated with the number of bytes of each sample and the number of samples scraped. These benchmarks are based on the [default targets scraped](prometheus-metrics-scrape-default.md), volume of custom metrics scraped, and number of nodes, pods, and containers. These numbers are meant as a reference since usage can still vary significantly depending on the number of time series and bytes per metric.
-The upper volume limit per pod is currently about 3-3.5 million samples per minute, depending on the number of bytes per sample. This limitation will be eliminated when sharding is added to the feature.
+The upper volume limit per pod is currently about 3-3.5 million samples per minute, depending on the number of bytes per sample. This limitation will be addressed when sharding is added in the future.
-The Container insights agent consists of a deployment with one replica and daemonset for scraping metrics. The daemonset scrapes any node-level targets such as cAdvisor, kubelet, and node exporter. You can also configure it to scrape any custom targets at the node level with static configs. The replicaset scrapes everything else such as kube-state-metrics or custom scrape jobs that utilize service discovery.
+The agent consists of a deployment with one replica and DaemonSet for scraping metrics. The DaemonSet scrapes any node-level targets such as cAdvisor, kubelet, and node exporter. You can also configure it to scrape any custom targets at the node level with static configs. The replicaset scrapes everything else such as kube-state-metrics or custom scrape jobs that utilize service discovery.
-## Comparison between small and large cluster for replicaset
+## Comparison between small and large cluster for replica
| Scrape Targets | Samples Sent / Minute | Node Count | Pod Count | Prometheus-Collector CPU Usage (cores) | Prometheus-Collector Memory Usage (bytes) |
|:|:|:|:|:|:|
The Container insights agent consists of a deployment with one replica and daemo
| default targets | 260,000 | 340 | 13000 | 1.10 c | 1.70 GB |
| default targets<br>+ custom targets | 3.56 million | 340 | 13000 | 5.13 c | 9.52 GB |
-## Comparison between small and large cluster for daemonsets
+## Comparison between small and large cluster for DaemonSets
| Scrape Targets | Samples Sent / Minute Total | Samples Sent / Minute / Pod | Node Count | Pod Count | Prometheus-Collector CPU Usage Total (cores) | Prometheus-Collector Memory Usage Total (bytes) | Prometheus-Collector CPU Usage / Pod (cores) | Prometheus-Collector Memory Usage / Pod (bytes) |
|:|:|:|:|:|:|:|:|:|
| default targets | 9,858 | 3,327 | 3 | 40 | 41.9 mc | 581 Mi | 14.7 mc | 189 Mi |
| default targets | 2.3 million | 14,400 | 340 | 13000 | 805 mc | 305.34 GB | 2.36 mc | 898 Mi |
-For more custom metrics, the single pod will behave the same as the replicaset pod depending on the volume of custom metrics.
+For more custom metrics, the single pod behaves the same as the replica pod depending on the volume of custom metrics.
-## Schedule ama-metrics replicaset pod on a nodepool with more resources
+## Schedule ama-metrics replica pod on a node pool with more resources
-A large volume of metrics per pod will require a large enough node to be able to handle the CPU and memory usage required. If the *ama-metrics* replicaset pod doesn't get scheduled on a node that has enough resources, it might keep getting OOMKilled and go to CrashLoopBackoff. In order to overcome this issue, if you have a node on your cluster that has higher resources (preferably in the system nodepool) and want to get the replicaset scheduled on that node, you can add the label `azuremonitor/metrics.replica.preferred=true` on the node and the replicaset pod will get scheduled on this node.
+A large volume of metrics per pod requires a large enough node to be able to handle the CPU and memory usage required. If the *ama-metrics* replica pod doesn't get scheduled on a node that has enough resources, it might keep getting OOMKilled and go to CrashLoopBackoff. In order to overcome this issue, if you have a node on your cluster that has higher resources (preferably in the system node pool) and want to get the replica scheduled on that node, you can add the label `azuremonitor/metrics.replica.preferred=true` on the node and the replica pod will get scheduled on this node.
```
kubectl label nodes <node-name> azuremonitor/metrics.replica.preferred="true"
```
azure-monitor Prometheus Metrics Scrape Validate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-validate.md
# Create and validate custom configuration file for Prometheus metrics in Azure Monitor (preview)
-In addition to the default scrape targets that Azure Monitor Prometheus agent scrapes by default, use the following steps to provide additional scrape config to the agent using a configmap. The Azure Monitor Prometheus agent doesn't understand or process operator [CRDs](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) for scrape configuration, but instead uses the native Prometheus configuration as defined in [Prometheus configuration](https://aka.ms/azureprometheus-promioconfig-scrape).
+In addition to the default scrape targets that Azure Monitor Prometheus agent scrapes by default, use the following steps to provide more scrape config to the agent using a configmap. The Azure Monitor Prometheus agent doesn't understand or process operator [CRDs](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) for scrape configuration, but instead uses the native Prometheus configuration as defined in [Prometheus configuration](https://aka.ms/azureprometheus-promioconfig-scrape).
-The 2 configmaps that can be used for custom target scraping are -
-- ama-metrics-prometheus-config - When a configmap with this name is created, the scraping of custom targets is done by the replicaset.-- ama-metrics-prometheus-config-node - When a configmap with this name is created, the scraping of custom targets is done by each daemonset. See [Advanced Setup](prometheus-metrics-scrape-configuration.md#advanced-setup-configure-custom-prometheus-scrape-jobs-for-the-daemonset) for more details .
+The three configmaps that can be used for custom target scraping are:
+- ama-metrics-prometheus-config - When a configmap with this name is created, scrape jobs defined in it are run from the Azure Monitor metrics replica pod running in the cluster.
+- ama-metrics-prometheus-config-node - When a configmap with this name is created, scrape jobs defined in it are run from each **Linux** DaemonSet pod running in the cluster. For more information, see [Advanced Setup](prometheus-metrics-scrape-configuration.md#advanced-setup-configure-custom-prometheus-scrape-jobs-for-the-daemonset).
+- ama-metrics-prometheus-config-node-windows - When a configmap with this name is created, scrape jobs defined in it are run from each **Windows** DaemonSet pod. For more information, see [Advanced Setup](prometheus-metrics-scrape-configuration.md#advanced-setup-configure-custom-prometheus-scrape-jobs-for-the-daemonset).
## Create Prometheus configuration file
-Create a Prometheus scrape configuration file named `prometheus-config`. See the [configuration tips and examples](prometheus-metrics-scrape-configuration.md#prometheus-configuration-tips-and-examples) for more details on authoring scrape config for Prometheus. You can also refer to [Prometheus.io](https://aka.ms/azureprometheus-promio) scrape configuration [reference](https://aka.ms/azureprometheus-promioconfig-scrape). Your config file will list the scrape configs under the section `scrape_configs` and can optionally use the global section for setting the global `scrape_interval`, `scrape_timeout`, and `external_labels`.
+
+A simpler way to author Prometheus scrape configuration jobs:
+- Step 1: Use a config file (YAML) to author and define the scrape jobs.
+- Step 2: Validate the scrape config file using a custom tool (as specified in this article), and then convert that config file to a configmap.
+- Step 3: Deploy the scrape config file as a configmap to your clusters.
+
+This way it's easier to author the YAML config (which is extremely space sensitive), and you avoid adding unintended spaces by authoring the scrape config directly inside a configmap.
+
+Create a Prometheus scrape configuration file named `prometheus-config`. For more information, see the [configuration tips and examples](prometheus-metrics-scrape-configuration.md#prometheus-configuration-tips-and-examples), which give more details on authoring scrape config for Prometheus. You can also refer to the [Prometheus.io](https://aka.ms/azureprometheus-promio) scrape configuration [reference](https://aka.ms/azureprometheus-promioconfig-scrape). Your config file lists the scrape configs under the `scrape_configs` section and can optionally use the global section for setting the global `scrape_interval`, `scrape_timeout`, and `external_labels`.
> [!TIP]
> Changes to the global section will impact the default configs and the custom config.
-Below is a sample Prometheus scrape config file:
+Here is a sample Prometheus scrape config file:
```
global:
- scrape_interval: 60s
-scrape_configs:
-- job_name: node
- scrape_interval: 30s
- scheme: http
- kubernetes_sd_configs:
- - role: endpoints
- namespaces:
- names:
- - node-exporter
- relabel_configs:
- - source_labels: [__meta_kubernetes_endpoints_name]
- action: keep
- regex: "dev-cluster-node-exporter-release-prometheus-node-exporter"
- - source_labels: [__metrics_path__]
- regex: (.*)
- target_label: metrics_path
- - source_labels: [__meta_kubernetes_endpoint_node_name]
- regex: (.*)
- target_label: instance
--- job_name: kube-state-metrics scrape_interval: 30s
+scrape_configs:
+- job_name: my_static_config
+ scrape_interval: 60s
static_configs:
- - targets: ['dev-cluster-kube-state-metrics-release.kube-state-metrics.svc.cluster.local:8080']
+ - targets: ['my-static-service.svc.cluster.local:1234']
-- job_name: prometheus_ref_app
+- job_name: prometheus_example_app
  scheme: http
  kubernetes_sd_configs:
    - role: service
  relabel_configs:
    - source_labels: [__meta_kubernetes_service_name]
      action: keep
- regex: "prometheus-reference-service"
+ regex: "prometheus-example-service"
```
## Validate the scrape config file
-The agent uses the `promconfigvalidator` tool to validate the Prometheus config given to it through the configmap. If the config isn't valid, then the custom configuration given won't be used by the agent. Once you have your Prometheus config file, you can *optionally* use the `promconfigvalidator` tool to validate your config before creating a configmap that the agent consumes.
+The agent uses a custom `promconfigvalidator` tool to validate the Prometheus config given to it through the configmap. If the config isn't valid, then the custom configuration given gets rejected by the addon agent. Once you have your Prometheus config file, you can *optionally* use the `promconfigvalidator` tool to validate your config before creating a configmap that the agent consumes.
-The `promconfigvalidator` tool is inside the Azure Monitor metrics addon. You can use any of the `ama-metrics-node-*` pods in `kube-system` namespace in your cluster to download the tool for validation. Use `kubectl cp` to download the tool and its configuration as shown below:
+The `promconfigvalidator` tool is shipped inside the Azure Monitor metrics addon pod(s). You can use any of the `ama-metrics-node-*` pods in `kube-system` namespace in your cluster to download the tool for validation. Use `kubectl cp` to download the tool and its configuration:
```
for podname in $(kubectl get pods -l rsName=ama-metrics -n=kube-system -o json | jq -r '.items[].metadata.name'); do kubectl cp -n=kube-system "${podname}":/opt/promconfigvalidator ./promconfigvalidator; kubectl cp -n=kube-system "${podname}":/opt/microsoft/otelcollector/collector-config-template.yml ./collector-config-template.yml; chmod 500 promconfigvalidator; done
```
-After copying the executable and the yaml, locate the path of your Prometheus configuration file. Then replace `<config path>` below and run the validator with the command:
+After copying the executable and the yaml, locate the path of your Prometheus configuration file that you authored. Then replace `<config path>` in the command and run the validator with the command:
```
./promconfigvalidator/promconfigvalidator --config "<config path>" --otelTemplate "./promconfigvalidator/collector-config-template.yml"
```
-Running the validator generates the merged configuration file `merged-otel-config.yaml` if no path is provided with the optional `output` parameter. Don't use this merged file as config to the metrics collector agent, as it's only used for tool validation and debugging purposes.
+Running the validator generates the merged configuration file `merged-otel-config.yaml` if no path is provided with the optional `output` parameter. Don't use this autogenerated merged file as config to the metrics collector agent, as it's only used for tool validation and debugging purposes.
-### Apply config file
-Your custom Prometheus configuration file is consumed as a field named `prometheus-config` in a configmap called `ama-metrics-prometheus-config` in the `kube-system` namespace. You can create a configmap from a file by renaming your Prometheus configuration file to `prometheus-config` (with no file extension) and running the following command:
+### Deploy config file as configmap
+Your custom Prometheus configuration file is consumed as a field named `prometheus-config` inside one of the metrics addon configmaps, `ama-metrics-prometheus-config`, `ama-metrics-prometheus-config-node`, or `ama-metrics-prometheus-config-node-windows`, in the `kube-system` namespace. You can create a configmap from the scrape config file you created above by renaming your Prometheus configuration file to `prometheus-config` (with no file extension) and running one or more of the following commands, depending on which configmap you want to create for your custom scrape job config.
+For example, to create the configmap to be used by the replica pod:
```
kubectl create configmap ama-metrics-prometheus-config --from-file=prometheus-config -n kube-system
```
+This creates a configmap named `ama-metrics-prometheus-config` in the `kube-system` namespace. The Azure Monitor metrics replica pod restarts in 30-60 seconds to apply the new config. To see if there are any issues with the config validation, processing, or merging, you can look at the `ama-metrics` replica pods.
+
+For example, to create the configmap to be used by the Linux DaemonSet:
+```
+kubectl create configmap ama-metrics-prometheus-config-node --from-file=prometheus-config -n kube-system
+```
+This creates a configmap named `ama-metrics-prometheus-config-node` in the `kube-system` namespace. Every Azure Monitor metrics Linux DaemonSet pod restarts in 30-60 seconds to apply the new config. To see if there are any issues with the config validation, processing, or merging, you can look at the `ama-metrics-node` Linux DaemonSet pods.
++
+For example, to create the configmap to be used by the Windows DaemonSet:
+```
+kubectl create configmap ama-metrics-prometheus-config-node-windows --from-file=prometheus-config -n kube-system
+```
+
This creates a configmap named `ama-metrics-prometheus-config-node-windows` in the `kube-system` namespace. Every Azure Monitor metrics Windows DaemonSet pod restarts in 30-60 seconds to apply the new config. To see if there are any issues with the config validation, processing, or merging, you can look at the `ama-metrics-win-node` Windows DaemonSet pods.
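All three configmaps share the same shape; only the configmap name and the pods that consume it differ. As a rough sketch, the Windows node-level configmap created by the preceding command would look something like the following, where the scrape job is a placeholder:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: ama-metrics-prometheus-config-node-windows
  namespace: kube-system
data:
  # The data key name comes from the file name passed to --from-file.
  prometheus-config: |-
    scrape_configs:
      - job_name: <your Windows scrape job here>
```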
+ *Ensure that the Prometheus config file is named `prometheus-config` before running these commands, since the file name is used as the configmap setting name.*
-This will create a configmap named `ama-metrics-prometheus-config` in `kube-system` namespace. The Azure Monitor metrics pod will then restart to apply the new config. To see if there any issues with the config validation, processing, or merging, you can look at the `ama-metrics` pods.
+This creates a configmap named `ama-metrics-prometheus-config` in the `kube-system` namespace. The Azure Monitor metrics pod restarts to apply the new config. To see if there are any issues with the config validation, processing, or merging, you can look at the `ama-metrics` pods.
A sample of the `ama-metrics-prometheus-config` configmap is [here](https://github.com/Azure/prometheus-collector/blob/main/otelcollector/configmaps/ama-metrics-prometheus-config-configmap.yaml).

### Troubleshooting
-If you successfully created the configmap (ama-metrics-prometheus-config or ama-metrics-prometheus-config-node) in the **kube-system** namespace and still don't see the custom targets being scraped, check for errors in the **replicaset pod** logs for **ama-metrics-prometheus-config** configmap or **daemonset pod** logs for **ama-metrics-prometheus-config-node** configmap) using *kubectl logs* and make sure there are no errors in the *Start Merging Default and Custom Prometheus Config* section with prefix *prometheus-config-merger*
+If you successfully created the configmap (ama-metrics-prometheus-config or ama-metrics-prometheus-config-node) in the **kube-system** namespace and still don't see the custom targets being scraped, check for errors in the **replica pod** logs (for the **ama-metrics-prometheus-config** configmap) or the **DaemonSet pod** logs (for the **ama-metrics-prometheus-config-node** configmap) using *kubectl logs*, and make sure there are no errors in the *Start Merging Default and Custom Prometheus Config* section with prefix *prometheus-config-merger*.
## Next steps
azure-monitor Prometheus Metrics Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-troubleshoot.md
Follow the steps in this article to determine the cause of Prometheus metrics not being collected as expected in Azure Monitor.
-Note that the ReplicaSet pod scrapes metrics from `kube-state-metrics` and custom scrape targets in the `ama-metrics-prometheus-config` configmap. The DaemonSet pods scrape metrics from the following targets on their respective node: `kubelet`, `cAdvisor`, `node-exporter`, and custom scrape targets in the `ama-metrics-prometheus-config-node` configmap. The pod that you will want to view the logs and the Prometheus UI for will depend on which scrape target you are investigating.
+The replica pod scrapes metrics from `kube-state-metrics` and custom scrape targets in the `ama-metrics-prometheus-config` configmap. DaemonSet pods scrape metrics from the following targets on their respective node: `kubelet`, `cAdvisor`, `node-exporter`, and custom scrape targets in the `ama-metrics-prometheus-config-node` configmap. The pod whose logs and Prometheus UI you want to view depends on which scrape target you're investigating.
+
+## Metrics Throttling
+
+In the Azure portal, navigate to your Azure Monitor workspace. Go to `Metrics` and verify that the metrics `Active Time Series % Utilization` and `Events Per Minute Ingested % Utilization` are below 100%.
++
+If either of them is more than 100%, ingestion into this workspace is being throttled. In the same workspace, navigate to `New Support Request` to create a request to increase the limits. Select the issue type as `Service and subscription limits (quotas)` and the quota type as `Managed Prometheus`.
## Pod status
Check the pod status with the following command:
kubectl get pods -n kube-system | grep ama-metrics
```
-- There should be one `ama-metrics-xxxxxxxxxx-xxxxx` replicaset pod, one `ama-metrics-ksm-*` pod, and an `ama-metrics-node-*` pod for each node on the cluster.
+- There should be one `ama-metrics-xxxxxxxxxx-xxxxx` replica pod, one `ama-metrics-ksm-*` pod, and an `ama-metrics-node-*` pod for each node on the cluster.
- Each pod state should be `Running` and have an equal number of restarts to the number of configmap changes that have been applied:

:::image type="content" source="media/prometheus-metrics-troubleshoot/pod-status.png" alt-text="Screenshot showing pod status." lightbox="media/prometheus-metrics-troubleshoot/pod-status.png":::
If each pod state is `Running` but one or more pods have restarts, run the follo
kubectl describe pod <ama-metrics pod name> -n kube-system
```
-- This provides the reason for the restarts. Pod restarts are expected if configmap changes have been made. If the reason for the restart is `OOMKilled`, the pod can't keep up with the volume of metrics. See the scale recommendations for the volume of metrics.
+- This command provides the reason for the restarts. Pod restarts are expected if configmap changes have been made. If the reason for the restart is `OOMKilled`, the pod can't keep up with the volume of metrics. See the scale recommendations for the volume of metrics.
If the pods are running as expected, the next place to check is the container logs.
kubectl logs <ama-metrics pod name> -n kube-system -c prometheus-collector
At startup, any initial errors are printed in red, while warnings are printed in yellow. (Viewing the colored logs requires at least PowerShell version 7 or a Linux distribution.)

- Verify if there's an issue with getting the authentication token:
- - The message *No configuration present for the AKS resource* will be logged every 5 minutes.
- * The pod will restart every 15 minutes to try again with the error: *No configuration present for the AKS resource*.
+ - The message *No configuration present for the AKS resource* gets logged every 5 minutes.
+ - The pod restarts every 15 minutes to try again with the error: *No configuration present for the AKS resource*.
+ - If so, check that the Data Collection Rule and Data Collection Endpoint exist in your resource group.
+ - Also verify that the Azure Monitor Workspace exists.
+ - Verify that you don't have a private AKS cluster and that it's not linked to an Azure Monitor Private Link Scope for any other service. This is currently not a supported scenario.
- Verify there are no errors with parsing the Prometheus config, merging with any default scrape targets enabled, and validating the full config.
-- Verify there are no errors from MetricsExtension regarding authenticating with the Azure Monitor workspace.
-- Verify there are no errors from the OpenTelemetry collector about scraping the targets.
+- If you did include a custom Prometheus config, verify that it is recognized in the logs. If not:
+ - Verify that your configmap has the correct name: `ama-metrics-prometheus-config` in the `kube-system` namespace.
+ - Verify that in the configmap your Prometheus config is under a section called `prometheus-config` under `data`, as shown here:
+ ```
+ kind: ConfigMap
+ apiVersion: v1
+ metadata:
+ name: ama-metrics-prometheus-config
+ namespace: kube-system
+ data:
+ prometheus-config: |-
+ scrape_configs:
+ - job_name: <your scrape job here>
+ ```
+- Verify there are no errors from `MetricsExtension` regarding authenticating with the Azure Monitor workspace.
+- Verify there are no errors from the `OpenTelemetry collector` about scraping the targets.
Run the following command:
Run the following command:
kubectl logs <ama-metrics pod name> -n kube-system -c addon-token-adapter
```
-- This will show an error if there's an issue with authenticating with the Azure Monitor workspace. Following is an example of logs with no issues:--
+- This command shows an error if there's an issue with authenticating with the Azure Monitor workspace. Shown here is an example of logs with no issues:
+ :::image type="content" source="media/prometheus-metrics-troubleshoot/addon-token-adapter.png" alt-text="Screenshot showing addon token log." lightbox="media/prometheus-metrics-troubleshoot/addon-token-adapter.png" :::
If there are no errors in the logs, the Prometheus interface can be used for debugging to verify the expected configuration and targets being scraped.

## Prometheus interface
-Every `ama-metrics-*` pod has the Prometheus Agent mode User Interface available on port 9090/ Port forward into either the replicaset or the daemonset to check the config, service discovery and targets endpoints as described below. This is used to verify the custom configs are correct, the intended targets have been discovered for each job, and there are no errors with scraping specific targets.
+Every `ama-metrics-*` pod has the Prometheus Agent mode User Interface available on port 9090. Port-forward into either the replica pod or one of the daemonset pods to check the config, service discovery and targets endpoints as described here to verify the custom configs are correct, the intended targets have been discovered for each job, and there are no errors with scraping specific targets.
Run the command `kubectl port-forward <ama-metrics pod> -n kube-system 9090`.

-- Open a browser to the address `127.0.0.1:9090/config`. This will have the full scrape configs. Verify all jobs are included in the config.
+- Open a browser to the address `127.0.0.1:9090/config`. This Ux has the full scrape config. Verify all jobs are included in the config.
:::image type="content" source="media/prometheus-metrics-troubleshoot/config-ui.png" alt-text="Screenshot showing configuration jobs." lightbox="media/prometheus-metrics-troubleshoot/config-ui.png":::

-- Go to `127.0.0.1:9090/service-discovery` to view the targets discovered by the service discovery object specified and what the relabel_configs have filtered the targets to be. For example, if missing metrics from a certain pod, you can find if that pod was discovered and what its URI is. You can then use this URI when looking at the targets to see if there are any scrape errors.
+- Go to `127.0.0.1:9090/service-discovery` to view the targets discovered by the service discovery object specified and what the relabel_configs have filtered the targets to be. For example, when missing metrics from a certain pod, you can find if that pod was discovered and what its URI is. You can then use this URI when looking at the targets to see if there are any scrape errors.
:::image type="content" source="media/prometheus-metrics-troubleshoot/service-discovery.png" alt-text="Screenshot showing service discovery." lightbox="media/prometheus-metrics-troubleshoot/service-discovery.png":::
When enabled, all Prometheus metrics that are scraped are hosted at port 9090. R
kubectl port-forward <ama-metrics pod name> -n kube-system 9091
```
-Go to `127.0.0.1:9091/metrics` in a browser to see if the metrics were scraped by the OpenTelemetry Collector. This can be done for every `ama-metrics-*` pod. If metrics aren't there, there could be an issue with the metric or label name lengths or the number of labels. See below for the service limits for Prometheus metrics.
+Go to `127.0.0.1:9091/metrics` in a browser to see if the metrics were scraped by the OpenTelemetry Collector. This Ux can be accessed for every `ama-metrics-*` pod. If metrics aren't there, there could be an issue with the metric or label name lengths or the number of labels. Also check for exceeding the ingestion quota for Prometheus metrics as specified in this article.
## Metric names, label names & label values
Agent based scraping currently has the limitations in the following table:
| Property | Limit |
|:|:|
-| Label name length | Less than or equal to 511 characters. When this limit is exceeded for any time-series in a job, the entire scrape job will fail, and metrics will be dropped from that job before ingestion. You can see up=0 for that job and also target Ux will show the reason for up=0. |
-| Label value length | Less than or equal to 1023 characters. When this limit is exceeded for any time-series in a job, the entire scrape job will fail, and metrics will be dropped from that job before ingestion. You can see up=0 for that job and also target Ux will show the reason for up=0. |
-| Number of labels per timeseries | Less than or equal to 63. When this limit is exceeded for any time-series in a job, the entire scrape job will fail, and metrics will be dropped from that job before ingestion. You can see up=0 for that job and also target Ux will show the reason for up=0. |
-| Metric name length | Less than or equal to 511 characters. When this limit is exceeded for any time-series in a job, only that particular series will be dropped. MetricextensionConsoleDebugLog will have traces for the dropped metric. |
+| Label name length | Less than or equal to 511 characters. When this limit is exceeded for any time-series in a job, the entire scrape job fails, and metrics get dropped from that job before ingestion. You can see up=0 for that job and also target Ux shows the reason for up=0. |
+| Label value length | Less than or equal to 1023 characters. When this limit is exceeded for any time-series in a job, the entire scrape job fails, and metrics get dropped from that job before ingestion. You can see up=0 for that job and also target Ux shows the reason for up=0. |
+| Number of labels per time series | Less than or equal to 63. When this limit is exceeded for any time-series in a job, the entire scrape job fails, and metrics get dropped from that job before ingestion. You can see up=0 for that job and also target Ux shows the reason for up=0. |
+| Metric name length | Less than or equal to 511 characters. When this limit is exceeded for any time-series in a job, only that particular series gets dropped. MetricextensionConsoleDebugLog has traces for the dropped metric. |
+| Label names with different casing | Two labels within the same metric sample with different casing are treated as having duplicate labels and are dropped when ingested. For example, the time series `my_metric{ExampleLabel="label_value_0", examplelabel="label_value_1"}` is dropped due to duplicate labels, since `ExampleLabel` and `examplelabel` are seen as the same label name. |
+
+## Check ingestion quota on Azure Monitor workspace
+
+If you see metrics missing, first check whether the ingestion limits are being exceeded for your Azure Monitor workspace. In the Azure portal, you can check the current usage for any Azure Monitor workspace. You can see current usage metrics under the `Metrics` menu for the Azure Monitor workspace. The following utilization metrics are available as standard metrics for each Azure Monitor workspace.
+
+- Active Time Series - The number of unique time series recently ingested into the workspace over the previous 12 hours
- Active Time Series Limit - The limit on the number of unique time series that can be actively ingested into the workspace
+- Active Time Series % Utilization - The percentage of current active time series being utilized
+- Events Per Minute Ingested - The number of events (samples) per minute recently received
+- Events Per Minute Ingested Limit - The maximum number of events per minute that can be ingested before getting throttled
+- Events Per Minute Ingested % Utilization - The percentage of the current metric ingestion rate limit being utilized
+
+Refer to [service quotas and limits](../service-limits.md#prometheus-metrics) for default quotas and also to understand what can be increased based on your usage. You can request a quota increase for Azure Monitor workspaces by using the `Support Request` menu for the Azure Monitor workspace. Ensure you include the ID, internal ID, and location/region for the Azure Monitor workspace in the support request, which you can find in the `Properties` menu for the Azure Monitor workspace in the Azure portal.
+ ## Next steps
azure-monitor Prometheus Rule Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-rule-groups.md
There are two types of Prometheus rules as described in the following table.
| Type | Description |
|:|:|
| Alert | Alert rules let you create an Azure Monitor alert based on the results of a Prometheus Query Language (PromQL) query. |
-| Recording | Recording rules allow you to pre-compute frequently needed or computationally extensive expressions and store their result as a new set of time series. Querying the precomputed result will then often be much faster than executing the original expression every time it's needed. This is especially useful for dashboards, which need to query the same expression repeatedly every time they refresh, or for use in alert rules, where multiple alert rules may be based on the same complex query. Time series created by recording rules are ingested back to your Azure Monitor workspace as new Prometheus metrics. |
+| Recording | Recording rules allow you to precompute frequently needed or computationally extensive expressions and store their result as a new set of time series. Querying the precomputed result will then often be much faster than executing the original expression every time it's needed. This is especially useful for dashboards, which need to query the same expression repeatedly every time they refresh, or for use in alert rules, where multiple alert rules may be based on the same complex query. Time series created by recording rules are ingested back to your Azure Monitor workspace as new Prometheus metrics. |
## View Prometheus rule groups

You can view the rule groups and their included rules in the Azure portal by selecting **Rule groups** from the Azure Monitor workspace.
To enable or disable a rule, click on the rule in the Azure portal. Select eithe
## Create Prometheus rules
-In the public preview, rule groups, recording rules and alert rules are configured using Azure Resource Manager (ARM) templates, API, and provisioning tools. This uses a new resource called **Prometheus Rule Group**. You can create and configure rule group resources where the alert rules and recording rules are defined as part of the rule group properties. Azure Prometheus rule groups are defined with a scope of a specific [Azure Monitor workspace](azure-monitor-workspace-overview.md).
+In the public preview, rule groups, recording rules and alert rules are configured using Azure Resource Manager (ARM) templates, the API, and provisioning tools. This uses a new resource called **Prometheus Rule Group**. You can create and configure rule group resources where the alert rules and recording rules are defined as part of the rule group properties. Azure Monitor Managed Prometheus rule groups are defined with a scope of a specific [Azure Monitor workspace](azure-monitor-workspace-overview.md).
-You can use a Resource Manager template to create and configure Prometheus rule groups, alert rules and recording rules. Resource Manager templates enable you to programmatically set up alert and recording rules in a consistent and reproducible way across your environments.
+You can use a Resource Manager template to create and configure Prometheus rule groups, alert rules, and recording rules. Resource Manager templates enable you to programmatically set up alert and recording rules in a consistent and reproducible way across all your environments.
The basic steps are as follows:
The basic steps are as follows:
### Limiting rules to a specific cluster
-You can optionally limit the rules in a rule group to query data originating from a specific cluster, using the rule group `clusterName` property.
-You should try to limit rules to a single cluster if your monitoring workspace contains a large scale of data from multiple clusters, and there's a concern that running a single set of rules on all the data may cause performance or throttling issues. Using the `clusterName` property, you can create multiple rule groups, each configured with the same rules, limiting each group to cover a different cluster.
+You can optionally limit the rules in a rule group to query data originating from a specific cluster, using the rule group `clusterName` property.
+You should try to limit rules to a single cluster if your Azure Monitor workspace contains a large amount of data from multiple clusters and if there's a concern that running a single set of rules on all the data may cause performance or throttling issues. By using the `clusterName` property, you can create multiple rule groups, each configured with the same rules, and therefore limit each group to cover a different cluster.
- The `clusterName` value must be identical to the `cluster` label that is added to the metrics from a specific cluster during data collection.
- If `clusterName` is not specified for a specific rule group, the rules in the group will query all the data in the workspace from all clusters.
Below is a sample template that creates a Prometheus rule group, including one r
The following tables describe each of the properties in the rule definition.

### Rule group
-The rule group will have the following properties, whether it includes alerting rule, recording rule, or both.
+The rule group will always have the following properties, whether it includes an alerting rule, a recording rule, or both.
| Name | Required | Type | Description |
|:|:|:|:|
The `rules` section will have the following properties for alerting rules.
## Next steps

-- [Use preconfigured alert rules for your Kubernetes cluster](../containers/container-insights-metric-alerts.md).
- [Learn more about the Azure alerts](../alerts/alerts-types.md).
- [Prometheus documentation for recording rules](https://aka.ms/azureprometheus-promio-recrules).
- [Prometheus documentation for alerting rules](https://aka.ms/azureprometheus-promio-alertrules).
azure-monitor Prometheus Self Managed Grafana Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-self-managed-grafana-azure-active-directory.md
To set up Azure Active Directory authentication, follow the steps below:
1. Note the **Application (client) ID** and **Directory(Tenant) ID**. They're used in the Grafana authentication settings.

   :::image type="content" source="./media/prometheus-self-managed-grafana-azure-active-directory/app-registration-overview.png" alt-text="A screenshot showing the App registration overview page.":::

1. On the app's overview page, select **Certificates and Secrets**.
-1. In the Client secrets tab, select **New client secret**.
+1. In the client secrets tab, select **New client secret**.
1. Enter a **Description**.
1. Select an **expiry** period from the dropdown and select **Add**.

> [!NOTE]
Allow your app to query data from your Azure Monitor workspace.
1. Select **Add**, then **Add role assignment** from the **Access Control (IAM)** page.
-1. On the Add role Assignment page, search for *Monitoring*.
+1. On the **Add role Assignment** page, search for **Monitoring**.
1. Select **Monitoring data reader**, then select the **Members** tab.

   :::image type="content" source="./media/prometheus-self-managed-grafana-azure-active-directory/add-role-assignment.png" alt-text="A screenshot showing the Add role assignment page":::
Allow your app to query data from your Azure Monitor workspace.
1. Select **Review + assign**.

   :::image type="content" source="./media/prometheus-self-managed-grafana-azure-active-directory/select-members.png" alt-text="A screenshot showing the Add role assignment, select members page.":::
-You've created your App registration and have assigned it access to query data from your Azure Monitor workspace. The next step is setting up your Grafana data source.
+You've created your App registration and have assigned it access to query data from your Azure Monitor workspace. The next step is setting up your Prometheus data source in Grafana.
## Set up self-managed Grafana to turn on Azure Authentication
-Grafana now supports connecting to Azure-managed Prometheus using the \https://grafana.com/docs/grafana/v9.0/setup-grafana/configure-grafana/Prometheus data source. Self-managed instances, a configuration change is needed to see the Azure Authentication option in Grafana. For self-hosted Grafana or other non-Azure-managed Grafana service, make the following changes:
+Grafana now supports connecting to Azure Monitor managed Prometheus using the [Prometheus data source](https://grafana.com/docs/grafana/latest/datasources/prometheus/). For self-hosted Grafana instances, a configuration change is needed to use the Azure Authentication option in Grafana. For self-hosted Grafana or any other Grafana instances that are not managed by Azure, make the following changes:
1. Locate the Grafana configuration file. See the [Configure Grafana](https://grafana.com/docs/grafana/v9.0/setup-grafana/configure-grafana/) documentation for details.
1. Identify your Grafana version.
Grafana now supports connecting to Azure-managed Prometheus using the \https://g
azure_auth_enabled = true
```
- For Azure-managed Grafana, you don't need to make any configuration changes. Managed Identity is also enabled by default.
-## Configure Grafana data source
+ For Azure Managed Grafana, you don't need to make any configuration changes. Managed Identity is also enabled by default.
+
+## Configure your Prometheus data source in Grafana
1. Sign in to your Grafana instance.
Grafana now supports connecting to Azure-managed Prometheus using the \https://g
1. Select **Save & test**

   :::image type="content" source="./media/prometheus-self-managed-grafana-azure-active-directory/configure-grafana.png" alt-text="A screenshot showing the Grafana settings page for adding a data source.":::
-## Next steps.
+## Next steps
- [Configure Grafana using managed system identity](./prometheus-grafana.md).
- [Collect Prometheus metrics for your AKS cluster](../essentials/prometheus-metrics-enable.md).
- [Configure Prometheus alerting and recording rules groups](prometheus-rule-groups.md).
azure-monitor Prometheus Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-workbooks.md
# Query Prometheus metrics using Azure workbooks (preview)

Create dashboards powered by Azure Monitor managed service for Prometheus using [Azure Workbooks](../visualize/workbooks-overview.md).
-This article introduces workbooks for Azure Monitor workspaces and shows you how to query Prometheus metrics using Azure workbooks and PromQL query language.
+This article introduces workbooks for Azure Monitor workspaces and shows you how to query Prometheus metrics using Azure workbooks and the Prometheus query language (PromQL).
## Pre-requisites

To query Prometheus metrics from an Azure Monitor workspace you need the following:
- An Azure Monitor workspace. To create an Azure Monitor workspace see [Create an Azure Monitor Workspace](./azure-monitor-workspace-overview.md?tabs=azure-portal.md).
- Your Azure Monitor workspace must be [collecting Prometheus metrics](./prometheus-metrics-enable.md) from an AKS cluster.
-- You have the **Monitoring Data Reader** role assigned for the Azure Monitor workspace.
+- The user must be assigned the **Monitoring Data Reader** role for the Azure Monitor workspace.
> [!NOTE]
-> Querying data from an Azure Monitor workspace is a data plane operation. Even if you are an owner or have elevated control plane access, you still need to assign the Monitoring Data Reader role. For more information, see [Azure control and data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md).
+> Querying data from an Azure Monitor workspace is a data plane operation. Even if you are an owner or have elevated control plane access, you still need to assign the **Monitoring Data Reader** role. For more information, see [Azure control and data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md).
## Prometheus Explorer workbook

Azure Monitor workspaces include an exploration workbook to query your Prometheus metrics.
Azure Monitor workspaces include an exploration workbook to query your Prometheu
A workbook has the following input options:
- **Time Range**. Select the period of time that you want to include in your query. Select **Custom** to set a start and end time.
- **PromQL**. Enter the PromQL query to retrieve your data. For more information about PromQL, see [Querying Prometheus](https://prometheus.io/docs/prometheus/latest/querying/basics/#querying-prometheus).
-- **Graph**, **Grid**, and **Dimensions** tabs. Switch between a graphic, tabular and dimensional view of the query output.
+- **Graph**, **Grid**, and **Dimensions** tabs. Switch between a graphic, tabular, and dimensional view of the query output.
![Screenshot that shows PromQL explorer.](./media/prometheus-workbooks/prometheus-explorer.png)
Workbooks supports many visualizations and Azure integrations. For more informat
If your workbook query does not return data:

-- Check that you have Monitoring Data Reader role permissions assigned through Access Control (IAM) in your Azure Monitor workspace
+- Check that you have the **Monitoring Data Reader** role assigned through Access control (IAM) on your Azure Monitor workspace (see the example after this list).
- Verify that you have turned on metrics collection in the Monitored clusters blade of your Azure Monitor workspace.
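One quick way to confirm the first item is to list role assignments at the workspace scope. This is a sketch; the resource ID is a placeholder for your own Azure Monitor workspace.

```azurecli
# Confirm that Monitoring Data Reader is assigned on the Azure Monitor workspace.
# The resource ID is a placeholder - substitute your own workspace.
az role assignment list \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Monitor/accounts/<azure-monitor-workspace-name>" \
  --query "[?roleDefinitionName=='Monitoring Data Reader'].principalName" \
  --output table
```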
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-categories.md
Title: Supported categories for Azure Monitor resource logs
description: Understand the supported services and event schemas for Azure Monitor resource logs. Previously updated : 04/13/2023 Last updated : 05/07/2023
The following is a list of the types of logs available for each resource type.
Some categories might be supported only for specific types of resources. See the resource-specific documentation if you feel you're missing a resource. For example, Microsoft.Sql/servers/databases categories aren't available for all types of databases. For more information, see [information on SQL Database diagnostic logging](/azure/azure-sql/database/metrics-diagnostic-telemetry-logging-streaming-export-configure). If you think something is missing, you can open a GitHub comment at the bottom of this article.-
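To put these categories to use, you reference them by name when you create a diagnostic setting on the resource. The following Azure CLI sketch sends two Key Vault log categories to a Log Analytics workspace; all resource IDs are placeholders, and the category names are only an example - pick categories that are valid for your resource type.

```azurecli
# Create a diagnostic setting that exports two resource log categories to a
# Log Analytics workspace. All IDs and category names below are placeholders.
az monitor diagnostic-settings create \
  --name "export-resource-logs" \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vault-name>" \
  --workspace "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
  --logs '[{"category":"AuditEvent","enabled":true},{"category":"AzurePolicyEvaluationDetails","enabled":true}]'
```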
-## Microsoft.AAD/DomainServices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AccountLogon |AccountLogon |No |
-|AccountManagement |AccountManagement |No |
-|DetailTracking |DetailTracking |No |
-|DirectoryServiceAccess |DirectoryServiceAccess |No |
-|LogonLogoff |LogonLogoff |No |
-|ObjectAccess |ObjectAccess |No |
-|PolicyChange |PolicyChange |No |
-|PrivilegeUse |PrivilegeUse |No |
-|SystemSecurity |SystemSecurity |No |
-
-## microsoft.aadiam/tenants
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Signin |Signin |Yes |
-
-## Microsoft.AgFoodPlatform/farmBeats
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ApplicationAuditLogs |Application Audit Logs |Yes |
-|FarmManagementLogs |Farm Management Logs |Yes |
-|FarmOperationLogs |Farm Operation Logs |Yes |
-|InsightLogs |Insight Logs |Yes |
-|JobProcessedLogs |Job Processed Logs |Yes |
-|ModelInferenceLogs |Model Inference Logs |Yes |
-|ProviderAuthLogs |Provider Auth Logs |Yes |
-|SatelliteLogs |Satellite Logs |Yes |
-|SensorManagementLogs |Sensor Management Logs |Yes |
-|WeatherLogs |Weather Logs |Yes |
-
-## Microsoft.AnalysisServices/servers
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Engine |Engine |No |
-|Service |Service |No |
-
-## Microsoft.ApiManagement/service
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|GatewayLogs |Logs related to ApiManagement Gateway |No |
-|WebSocketConnectionLogs |Logs related to Websocket Connections |Yes |
-
-## Microsoft.App/managedEnvironments
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AppEnvSpringAppConsoleLogs |Spring App console logs |Yes |
-|ContainerAppConsoleLogs |Container App console logs |Yes |
-|ContainerAppSystemLogs |Container App system logs |Yes |
-
-## Microsoft.AppConfiguration/configurationStores
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit |Yes |
-|HttpRequest |HTTP Requests |Yes |
-
-## Microsoft.AppPlatform/Spring
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ApplicationConsole |Application Console |No |
-|BuildLogs |Build Logs |Yes |
-|ContainerEventLogs |Container Event Logs |Yes |
-|IngressLogs |Ingress Logs |Yes |
-|SystemLogs |System Logs |No |
-
-## Microsoft.Attestation/attestationProviders
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditEvent |AuditEvent message log category. |No |
-|NotProcessed |Requests which could not be processed. |Yes |
-|Operational |Operational message log category. |Yes |
-
-## Microsoft.Automation/automationAccounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditEvent |AuditEvent |Yes |
-|DscNodeStatus |DscNodeStatus |No |
-|JobLogs |JobLogs |No |
-|JobStreams |JobStreams |No |
-
-## Microsoft.AutonomousDevelopmentPlatform/accounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit |Yes |
-|Operational |Operational |Yes |
-|Request |Request |Yes |
-
-## Microsoft.AutonomousDevelopmentPlatform/workspaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit |Yes |
-|Operational |Operational |Yes |
-|Request |Request |Yes |
-
-## microsoft.avs/privateClouds
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|vmwaresyslog |VMware Syslog |Yes |
-
-## microsoft.azuresphere/catalogs
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditLogs |Audit Logs |Yes |
-|DeviceEvents |Device Events |Yes |
-
-## Microsoft.Batch/batchaccounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditLog |Audit Logs |Yes |
-|ServiceLog |Service Logs |No |
-|ServiceLogs |Service Logs |Yes |
-
-## microsoft.botservice/botservices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|BotRequest |Requests from the channels to the bot |Yes |
-
-## Microsoft.Cache/redis
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ConnectedClientList |Connected client list |Yes |
-
-## Microsoft.Cache/redisEnterprise/databases
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ConnectionEvents |Connection events (New Connection/Authentication/Disconnection) |Yes |
-
-## Microsoft.Cdn/cdnwebapplicationfirewallpolicies
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|WebApplicationFirewallLogs |Web Application Firewall Logs |No |
-
-## Microsoft.Cdn/profiles
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AzureCdnAccessLog |Azure Cdn Access Log |No |
-|FrontDoorAccessLog |FrontDoor Access Log |Yes |
-|FrontDoorHealthProbeLog |FrontDoor Health Probe Log |Yes |
-|FrontDoorWebApplicationFirewallLog |FrontDoor WebApplicationFirewall Log |Yes |
-
-## Microsoft.Cdn/profiles/endpoints
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|CoreAnalytics |Gets the metrics of the endpoint, e.g., bandwidth, egress, etc. |No |
-
-## Microsoft.Chaos/experiments
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ExperimentOrchestration |Experiment Orchestration Events |Yes |
-
-## Microsoft.ClassicNetwork/networksecuritygroups
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Network Security Group Rule Flow Event |Network Security Group Rule Flow Event |No |
-
-## Microsoft.CodeSigning/codesigningaccounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|SignTransactions |Sign Transactions |Yes |
-
-## Microsoft.CognitiveServices/accounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit Logs |No |
-|RequestResponse |Request and Response Logs |No |
-|Trace |Trace Logs |No |
-
-## Microsoft.Communication/CommunicationServices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuthOperational |Operational Authentication Logs |Yes |
-|CallAutomationOperational |Operational Call Automation Logs |Yes |
-|CallDiagnostics |Call Diagnostics Logs |Yes |
-|CallRecordingSummary |Call Recording Summary Logs |Yes |
-|CallSummary |Call Summary Logs |Yes |
-|ChatOperational |Operational Chat Logs |No |
-|EmailSendMailOperational |Email Service Send Mail Logs |Yes |
-|EmailStatusUpdateOperational |Email Service Delivery Status Update Logs |Yes |
-|EmailUserEngagementOperational |Email Service User Engagement Logs |Yes |
-|JobRouterOperational |Operational Job Router Logs |Yes |
-|NetworkTraversalDiagnostics |Network Traversal Relay Diagnostic Logs |Yes |
-|NetworkTraversalOperational |Operational Network Traversal Logs |Yes |
-|RoomsOperational |Operational Rooms Logs |Yes |
-|SMSOperational |Operational SMS Logs |No |
-|Usage |Usage Records |No |
-
-## Microsoft.Compute/virtualMachines
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|SoftwareUpdateProfile |SoftwareUpdateProfile |Yes |
-|SoftwareUpdates |SoftwareUpdates |Yes |
-
-## Microsoft.ConfidentialLedger/ManagedCCF
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|applicationlogs |CCF Application Logs |Yes |
-
-## Microsoft.ConfidentialLedger/ManagedCCFs
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|applicationlogs |CCF Application Logs |Yes |
-
-## Microsoft.ConnectedCache/CacheNodes
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Events |Events |Yes |
-
-## Microsoft.ConnectedCache/ispCustomers
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Events |Events |Yes |
-
-## Microsoft.ConnectedVehicle/platformAccounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |MCVP Audit Logs |Yes |
-|Logs |MCVP Logs |Yes |
-
-## Microsoft.ContainerRegistry/registries
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ContainerRegistryLoginEvents |Login Events |No |
-|ContainerRegistryRepositoryEvents |RepositoryEvent logs |No |
-
-## Microsoft.ContainerService/fleets
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|cloud-controller-manager |Kubernetes Cloud Controller Manager |Yes |
-|cluster-autoscaler |Kubernetes Cluster Autoscaler |Yes |
-|csi-azuredisk-controller |csi-azuredisk-controller |Yes |
-|csi-azurefile-controller |csi-azurefile-controller |Yes |
-|csi-snapshot-controller |csi-snapshot-controller |Yes |
-|guard |guard |Yes |
-|kube-apiserver |Kubernetes API Server |Yes |
-|kube-audit |Kubernetes Audit |Yes |
-|kube-audit-admin |Kubernetes Audit Admin Logs |Yes |
-|kube-controller-manager |Kubernetes Controller Manager |Yes |
-|kube-scheduler |Kubernetes Scheduler |Yes |
-
-## Microsoft.ContainerService/managedClusters
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|cloud-controller-manager |Kubernetes Cloud Controller Manager |Yes |
-|cluster-autoscaler |Kubernetes Cluster Autoscaler |No |
-|csi-azuredisk-controller |csi-azuredisk-controller |Yes |
-|csi-azurefile-controller |csi-azurefile-controller |Yes |
-|csi-snapshot-controller |csi-snapshot-controller |Yes |
-|guard |guard |No |
-|kube-apiserver |Kubernetes API Server |No |
-|kube-audit |Kubernetes Audit |No |
-|kube-audit-admin |Kubernetes Audit Admin Logs |No |
-|kube-controller-manager |Kubernetes Controller Manager |No |
-|kube-scheduler |Kubernetes Scheduler |No |
-
-## Microsoft.CustomProviders/resourceproviders
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditLogs |Audit logs for MiniRP calls |No |
-
-## Microsoft.D365CustomerInsights/instances
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit events |No |
-|Operational |Operational events |No |
-
-## Microsoft.Dashboard/grafana
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|GrafanaLoginEvents |Grafana Login Events |Yes |
-
-## Microsoft.Databricks/workspaces
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|accounts |Databricks Accounts |No |
-|capsule8Dataplane |Databricks Capsule8 Container Security Scanning Reports |Yes |
-|clamAVScan |Databricks Clam AV Scan |Yes |
-|clusterLibraries |Databricks Cluster Libraries |Yes |
-|clusters |Databricks Clusters |No |
-|databrickssql |Databricks DatabricksSQL |Yes |
-|dbfs |Databricks File System |No |
-|deltaPipelines |Databricks Delta Pipelines |Yes |
-|featureStore |Databricks Feature Store |Yes |
-|genie |Databricks Genie |Yes |
-|gitCredentials |Databricks Git Credentials |Yes |
-|globalInitScripts |Databricks Global Init Scripts |Yes |
-|iamRole |Databricks IAM Role |Yes |
-|instancePools |Instance Pools |No |
-|jobs |Databricks Jobs |No |
-|mlflowAcledArtifact |Databricks MLFlow Acled Artifact |Yes |
-|mlflowExperiment |Databricks MLFlow Experiment |Yes |
-|modelRegistry |Databricks Model Registry |Yes |
-|notebook |Databricks Notebook |No |
-|partnerHub |Databricks Partner Hub |Yes |
-|RemoteHistoryService |Databricks Remote History Service |Yes |
-|repos |Databricks Repos |Yes |
-|secrets |Databricks Secrets |No |
-|serverlessRealTimeInference |Databricks Serverless Real-Time Inference |Yes |
-|sqlanalytics |Databricks SQL Analytics |Yes |
-|sqlPermissions |Databricks SQLPermissions |No |
-|ssh |Databricks SSH |No |
-|unityCatalog |Databricks Unity Catalog |Yes |
-|webTerminal |Databricks Web Terminal |Yes |
-|workspace |Databricks Workspace |No |
-
-## Microsoft.DataCollaboration/workspaces
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|CollaborationAudit |Collaboration Audit |Yes |
-|Computations |Computations |Yes |
-|DataAssets |Data Assets |No |
-|Pipelines |Pipelines |No |
-|Proposals |Proposals |No |
-|Scripts |Scripts |No |
-
-## Microsoft.DataFactory/factories
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ActivityRuns |Pipeline activity runs log |No |
-|AirflowDagProcessingLogs |Airflow dag processing logs |Yes |
-|AirflowSchedulerLogs |Airflow scheduler logs |Yes |
-|AirflowTaskLogs |Airflow task execution logs |Yes |
-|AirflowWebLogs |Airflow web logs |Yes |
-|AirflowWorkerLogs |Airflow worker logs |Yes |
-|PipelineRuns |Pipeline runs log |No |
-|SandboxActivityRuns |Sandbox Activity runs log |Yes |
-|SandboxPipelineRuns |Sandbox Pipeline runs log |Yes |
-|SSISIntegrationRuntimeLogs |SSIS integration runtime logs |No |
-|SSISPackageEventMessageContext |SSIS package event message context |No |
-|SSISPackageEventMessages |SSIS package event messages |No |
-|SSISPackageExecutableStatistics |SSIS package executable statistics |No |
-|SSISPackageExecutionComponentPhases |SSIS package execution component phases |No |
-|SSISPackageExecutionDataStatistics |SSIS package execution data statistics |No |
-|TriggerRuns |Trigger runs log |No |
-
-## Microsoft.DataLakeAnalytics/accounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit Logs |No |
-|ConfigurationChange |Configuration Change Event Logs |Yes |
-|JobEvent |Job Event Logs |Yes |
-|JobInfo |Job Info Logs |Yes |
-|Requests |Request Logs |No |
-
-## Microsoft.DataLakeStore/accounts
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit Logs |No |
-|Requests |Request Logs |No |
-
-## Microsoft.DataProtection/BackupVaults
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AddonAzureBackupJobs |Addon Azure Backup Job Data |Yes |
-|AddonAzureBackupPolicy |Addon Azure Backup Policy Data |Yes |
-|AddonAzureBackupProtectedInstance |Addon Azure Backup Protected Instance Data |Yes |
-|CoreAzureBackup |Core Azure Backup Data |Yes |
-
-## Microsoft.DataShare/accounts
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ReceivedShareSnapshots |Received Share Snapshots |No |
-|SentShareSnapshots |Sent Share Snapshots |No |
-|Shares |Shares |No |
-|ShareSubscriptions |Share Subscriptions |No |
-
-## Microsoft.DBforMariaDB/servers
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|MySqlAuditLogs |MariaDB Audit Logs |No |
-|MySqlSlowLogs |MariaDB Server Logs |No |
-
-## Microsoft.DBforMySQL/flexibleServers
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|MySqlAuditLogs |MySQL Audit Logs |No |
-|MySqlSlowLogs |MySQL Slow Logs |No |
-
-## Microsoft.DBforMySQL/servers
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|MySqlAuditLogs |MySQL Audit Logs |No |
-|MySqlSlowLogs |MySQL Server Logs |No |
-
-## Microsoft.DBforPostgreSQL/flexibleServers
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|PostgreSQLFlexDatabaseXacts |PostgreSQL remaining transactions |Yes |
-|PostgreSQLFlexQueryStoreRuntime |PostgreSQL Query Store Runtime |Yes |
-|PostgreSQLFlexQueryStoreWaitStats |PostgreSQL Query Store Wait Statistics |Yes |
-|PostgreSQLFlexSessions |PostgreSQL Sessions data |Yes |
-|PostgreSQLFlexTableStats |PostgreSQL Autovacuum and schema statistics |Yes |
-|PostgreSQLLogs |PostgreSQL Server Logs |No |
-
-## Microsoft.DBForPostgreSQL/serverGroupsv2
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|PostgreSQLLogs |PostgreSQL Server Logs |Yes |
-
-## Microsoft.DBforPostgreSQL/servers
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|PostgreSQLLogs |PostgreSQL Server Logs |No |
-|QueryStoreRuntimeStatistics |PostgreSQL Query Store Runtime Statistics |No |
-|QueryStoreWaitStatistics |PostgreSQL Query Store Wait Statistics |No |
-
-## Microsoft.DBforPostgreSQL/serversv2
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|PostgreSQLLogs |PostgreSQL Server Logs |No |
-
-## Microsoft.DesktopVirtualization/applicationgroups
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Checkpoint |Checkpoint |No |
-|Error |Error |No |
-|Management |Management |No |
-
-## Microsoft.DesktopVirtualization/hostpools
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AgentHealthStatus |AgentHealthStatus |No |
-|AutoscaleEvaluationPooled |Do not use - internal testing |Yes |
-|Checkpoint |Checkpoint |No |
-|Connection |Connection |No |
-|ConnectionGraphicsData |Connection Graphics Data Logs Preview |Yes |
-|Error |Error |No |
-|HostRegistration |HostRegistration |No |
-|Management |Management |No |
-|NetworkData |Network Data Logs |Yes |
-|SessionHostManagement |Session Host Management Activity Logs |Yes |
-
-## Microsoft.DesktopVirtualization/scalingplans
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Autoscale |Autoscale logs |Yes |
-
-## Microsoft.DesktopVirtualization/workspaces
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Checkpoint |Checkpoint |No |
-|Error |Error |No |
-|Feed |Feed |No |
-|Management |Management |No |
-
-## Microsoft.DevCenter/devcenters
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DataplaneAuditEvent |Dataplane audit logs |Yes |
-
-## Microsoft.Devices/IotHubs
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|C2DCommands |C2D Commands |No |
-|C2DTwinOperations |C2D Twin Operations |No |
-|Configurations |Configurations |No |
-|Connections |Connections |No |
-|D2CTwinOperations |D2CTwinOperations |No |
-|DeviceIdentityOperations |Device Identity Operations |No |
-|DeviceStreams |Device Streams (Preview) |No |
-|DeviceTelemetry |Device Telemetry |No |
-|DirectMethods |Direct Methods |No |
-|DistributedTracing |Distributed Tracing (Preview) |No |
-|FileUploadOperations |File Upload Operations |No |
-|JobsOperations |Jobs Operations |No |
-|Routes |Routes |No |
-|TwinQueries |Twin Queries |No |
-
-## Microsoft.Devices/provisioningServices
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DeviceOperations |Device Operations |No |
-|ServiceOperations |Service Operations |No |
-
-## Microsoft.DigitalTwins/digitalTwinsInstances
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DataHistoryOperation |DataHistoryOperation |Yes |
-|DigitalTwinsOperation |DigitalTwinsOperation |No |
-|EventRoutesOperation |EventRoutesOperation |No |
-|ModelsOperation |ModelsOperation |No |
-|QueryOperation |QueryOperation |No |
-|ResourceProviderOperation |ResourceProviderOperation |Yes |
-
-## Microsoft.DocumentDB/cassandraClusters
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|CassandraAudit |CassandraAudit |Yes |
-|CassandraLogs |CassandraLogs |Yes |
-
-## Microsoft.DocumentDB/DatabaseAccounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|CassandraRequests |CassandraRequests |No |
-|ControlPlaneRequests |ControlPlaneRequests |No |
-|DataPlaneRequests |DataPlaneRequests |No |
-|GremlinRequests |GremlinRequests |No |
-|MongoRequests |MongoRequests |No |
-|PartitionKeyRUConsumption |PartitionKeyRUConsumption |No |
-|PartitionKeyStatistics |PartitionKeyStatistics |No |
-|QueryRuntimeStatistics |QueryRuntimeStatistics |No |
-|TableApiRequests |TableApiRequests |Yes |
-
-## Microsoft.EventGrid/domains
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DataPlaneRequests |Data plane operations logs |Yes |
-|DeliveryFailures |Delivery Failure Logs |No |
-|PublishFailures |Publish Failure Logs |No |
-
-## Microsoft.EventGrid/partnerNamespaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DataPlaneRequests |Data plane operations logs |Yes |
-|PublishFailures |Publish Failure Logs |No |
-
-## Microsoft.EventGrid/partnerTopics
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DeliveryFailures |Delivery Failure Logs |No |
-
-## Microsoft.EventGrid/systemTopics
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DeliveryFailures |Delivery Failure Logs |No |
-
-## Microsoft.EventGrid/topics
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DataPlaneRequests |Data plane operations logs |Yes |
-|DeliveryFailures |Delivery Failure Logs |No |
-|PublishFailures |Publish Failure Logs |No |
-
-## Microsoft.EventHub/Namespaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ApplicationMetricsLogs |Application Metrics Logs |Yes |
-|ArchiveLogs |Archive Logs |No |
-|AutoScaleLogs |Auto Scale Logs |No |
-|CustomerManagedKeyUserLogs |Customer Managed Key Logs |No |
-|EventHubVNetConnectionEvent |VNet/IP Filtering Connection Logs |No |
-|KafkaCoordinatorLogs |Kafka Coordinator Logs |No |
-|KafkaUserErrorLogs |Kafka User Error Logs |No |
-|OperationalLogs |Operational Logs |No |
-|RuntimeAuditLogs |Runtime Audit Logs |Yes |
-
-## Microsoft.HealthcareApis/services
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditLogs |Audit logs |No |
-|DiagnosticLogs |Diagnostic logs |Yes |
-
-## Microsoft.HealthcareApis/workspaces/analyticsconnectors
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DiagnosticLogs |Diagnostic logs for Analytics Connector |Yes |
-
-## Microsoft.HealthcareApis/workspaces/dicomservices
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditLogs |Audit logs |Yes |
-|DiagnosticLogs |Diagnostic logs |Yes |
-
-## Microsoft.HealthcareApis/workspaces/fhirservices
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditLogs |FHIR Audit logs |Yes |
-
-## Microsoft.HealthcareApis/workspaces/iotconnectors
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DiagnosticLogs |Diagnostic logs |Yes |
-
-## microsoft.insights/autoscalesettings
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AutoscaleEvaluations |Autoscale Evaluations |No |
-|AutoscaleScaleActions |Autoscale Scale Actions |No |
-
-## microsoft.insights/components
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AppAvailabilityResults |Availability results |No |
-|AppBrowserTimings |Browser timings |No |
-|AppDependencies |Dependencies |No |
-|AppEvents |Events |No |
-|AppExceptions |Exceptions |No |
-|AppMetrics |Metrics |No |
-|AppPageViews |Page views |No |
-|AppPerformanceCounters |Performance counters |No |
-|AppRequests |Requests |No |
-|AppSystemEvents |System events |No |
-|AppTraces |Traces |No |
-
-## Microsoft.Insights/datacollectionrules
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|LogErrors |Log Errors |Yes |
-|LogTroubleshooting |Log Troubleshooting |Yes |
-
-## microsoft.keyvault/managedhsms
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditEvent |Audit Event |No |
-
-## Microsoft.KeyVault/vaults
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditEvent |Audit Logs |No |
-|AzurePolicyEvaluationDetails |Azure Policy Evaluation Details |Yes |
-
-## Microsoft.Kusto/clusters
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Command |Command |No |
-|FailedIngestion |Failed ingestion |No |
-|IngestionBatching |Ingestion batching |No |
-|Journal |Journal |Yes |
-|Query |Query |No |
-|SucceededIngestion |Succeeded ingestion |No |
-|TableDetails |Table details |No |
-|TableUsageStatistics |Table usage statistics |No |
-
-## microsoft.loadtestservice/loadtests
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|OperationLogs |Azure Load Testing Operations |Yes |
-
-## Microsoft.Logic/IntegrationAccounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|IntegrationAccountTrackingEvents |Integration Account track events |No |
-
-## Microsoft.Logic/Workflows
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|WorkflowRuntime |Workflow runtime diagnostic events |No |
-
-## Microsoft.MachineLearningServices/registries
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|RegistryAssetReadEvent |Registry Asset Read Event |Yes |
-|RegistryAssetWriteEvent |Registry Asset Write Event |Yes |
-
-## Microsoft.MachineLearningServices/workspaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AmlComputeClusterEvent |AmlComputeClusterEvent |No |
-|AmlComputeClusterNodeEvent |AmlComputeClusterNodeEvent |Yes |
-|AmlComputeCpuGpuUtilization |AmlComputeCpuGpuUtilization |No |
-|AmlComputeJobEvent |AmlComputeJobEvent |No |
-|AmlRunStatusChangedEvent |AmlRunStatusChangedEvent |No |
-|ComputeInstanceEvent |ComputeInstanceEvent |Yes |
-|DataLabelChangeEvent |DataLabelChangeEvent |Yes |
-|DataLabelReadEvent |DataLabelReadEvent |Yes |
-|DataSetChangeEvent |DataSetChangeEvent |Yes |
-|DataSetReadEvent |DataSetReadEvent |Yes |
-|DataStoreChangeEvent |DataStoreChangeEvent |Yes |
-|DataStoreReadEvent |DataStoreReadEvent |Yes |
-|DeploymentEventACI |DeploymentEventACI |Yes |
-|DeploymentEventAKS |DeploymentEventAKS |Yes |
-|DeploymentReadEvent |DeploymentReadEvent |Yes |
-|EnvironmentChangeEvent |EnvironmentChangeEvent |Yes |
-|EnvironmentReadEvent |EnvironmentReadEvent |Yes |
-|InferencingOperationACI |InferencingOperationACI |Yes |
-|InferencingOperationAKS |InferencingOperationAKS |Yes |
-|ModelsActionEvent |ModelsActionEvent |Yes |
-|ModelsChangeEvent |ModelsChangeEvent |Yes |
-|ModelsReadEvent |ModelsReadEvent |Yes |
-|PipelineChangeEvent |PipelineChangeEvent |Yes |
-|PipelineReadEvent |PipelineReadEvent |Yes |
-|RunEvent |RunEvent |Yes |
-|RunReadEvent |RunReadEvent |Yes |
-
-## Microsoft.MachineLearningServices/workspaces/onlineEndpoints
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AmlOnlineEndpointConsoleLog |AmlOnlineEndpointConsoleLog |Yes |
-|AmlOnlineEndpointEventLog |AmlOnlineEndpointEventLog (preview) |Yes |
-|AmlOnlineEndpointTrafficLog |AmlOnlineEndpointTrafficLog (preview) |Yes |
-
-## Microsoft.ManagedNetworkFabric/networkDevices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AppAvailabilityResults |Availability results |Yes |
-|AppBrowserTimings |Browser timings |Yes |
-
-## Microsoft.Media/mediaservices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|KeyDeliveryRequests |Key Delivery Requests |No |
-|MediaAccount |Media Account Health Status |Yes |
-
-## Microsoft.Media/mediaservices/liveEvents
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|LiveEventState |Live Event Operations |Yes |
-
-## Microsoft.Media/mediaservices/streamingEndpoints
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|StreamingEndpointRequests |Streaming Endpoint Requests |Yes |
-
-## Microsoft.Media/videoanalyzers
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit Logs |Yes |
-|Diagnostics |Diagnostics Logs |Yes |
-|Operational |Operational Logs |Yes |
-
-## Microsoft.NetApp/netAppAccounts/capacityPools
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Autoscale |Capacity Pool Autoscaled |Yes |
-
-## Microsoft.NetApp/netAppAccounts/capacityPools/volumes
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ANFFileAccess |ANF File Access |Yes |
-
-## Microsoft.Network/applicationgateways
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ApplicationGatewayAccessLog |Application Gateway Access Log |No |
-|ApplicationGatewayFirewallLog |Application Gateway Firewall Log |No |
-|ApplicationGatewayPerformanceLog |Application Gateway Performance Log |No |
-
-## Microsoft.Network/azureFirewalls
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AZFWApplicationRule |Azure Firewall Application Rule |Yes |
-|AZFWApplicationRuleAggregation |Azure Firewall Network Rule Aggregation (Policy Analytics) |Yes |
-|AZFWDnsQuery |Azure Firewall DNS query |Yes |
-|AZFWFatFlow |Azure Firewall Fat Flow Log |Yes |
-|AZFWFlowTrace |Azure Firewall Flow Trace Log |Yes |
-|AZFWFqdnResolveFailure |Azure Firewall FQDN Resolution Failure |Yes |
-|AZFWIdpsSignature |Azure Firewall IDPS Signature |Yes |
-|AZFWNatRule |Azure Firewall Nat Rule |Yes |
-|AZFWNatRuleAggregation |Azure Firewall Nat Rule Aggregation (Policy Analytics) |Yes |
-|AZFWNetworkRule |Azure Firewall Network Rule |Yes |
-|AZFWNetworkRuleAggregation |Azure Firewall Application Rule Aggregation (Policy Analytics) |Yes |
-|AZFWThreatIntel |Azure Firewall Threat Intelligence |Yes |
-|AzureFirewallApplicationRule |Azure Firewall Application Rule (Legacy Azure Diagnostics) |No |
-|AzureFirewallDnsProxy |Azure Firewall DNS Proxy (Legacy Azure Diagnostics) |No |
-|AzureFirewallNetworkRule |Azure Firewall Network Rule (Legacy Azure Diagnostics) |No |
-
-## microsoft.network/bastionHosts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|BastionAuditLogs |Bastion Audit Logs |No |
-
-## Microsoft.Network/expressRouteCircuits
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|PeeringRouteLog |Peering Route Table Logs |No |
-
-## Microsoft.Network/frontdoors
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|FrontdoorAccessLog |Frontdoor Access Log |No |
-|FrontdoorWebApplicationFirewallLog |Frontdoor Web Application Firewall Log |No |
-
-## Microsoft.Network/loadBalancers
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|LoadBalancerAlertEvent |Load Balancer Alert Events |No |
-|LoadBalancerProbeHealthStatus |Load Balancer Probe Health Status |No |
-
-## Microsoft.Network/networkManagers
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|NetworkGroupMembershipChange |Network Group Membership Change |Yes |
-
-## Microsoft.Network/networksecuritygroups
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|NetworkSecurityGroupEvent |Network Security Group Event |No |
-|NetworkSecurityGroupFlowEvent |Network Security Group Rule Flow Event |No |
-|NetworkSecurityGroupRuleCounter |Network Security Group Rule Counter |No |
-
-## Microsoft.Network/networkSecurityPerimeters
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|NspCrossPerimeterInboundAllowed |Cross perimeter inbound access allowed by perimeter link. |Yes |
-|NspCrossPerimeterOutboundAllowed |Cross perimeter outbound access allowed by perimeter link. |Yes |
-|NspIntraPerimeterInboundAllowed |Inbound access allowed within same perimeter. |Yes |
-|NspIntraPerimeterOutboundAllowed |Outbound attempted to same perimeter. NOTE: To be deprecated in future. |Yes |
-|NspOutboundAttempt |Outbound attempted to same or different perimeter. |Yes |
-|NspPrivateInboundAllowed |Private endpoint traffic allowed. |Yes |
-|NspPublicInboundPerimeterRulesAllowed |Public inbound access allowed by NSP access rules. |Yes |
-|NspPublicInboundPerimeterRulesDenied |Public inbound access denied by NSP access rules. |Yes |
-|NspPublicInboundResourceRulesAllowed |Public inbound access allowed by PaaS resource rules. |Yes |
-|NspPublicInboundResourceRulesDenied |Public inbound access denied by PaaS resource rules. |Yes |
-|NspPublicOutboundPerimeterRulesAllowed |Public outbound access allowed by NSP access rules. |Yes |
-|NspPublicOutboundPerimeterRulesDenied |Public outbound access denied by NSP access rules. |Yes |
-|NspPublicOutboundResourceRulesAllowed |Public outbound access allowed by PaaS resource rules. |Yes |
-|NspPublicOutboundResourceRulesDenied |Public outbound access denied by PaaS resource rules |Yes |
-
-## Microsoft.Network/networkSecurityPerimeters/profiles
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|NSPInboundAccessAllowed |NSP Inbound Access Allowed. |Yes |
-|NSPInboundAccessDenied |NSP Inbound Access Denied. |Yes |
-|NSPOutboundAccessAllowed |NSP Outbound Access Allowed. |Yes |
-|NSPOutboundAccessDenied |NSP Outbound Access Denied. |Yes |
-
-## microsoft.network/p2svpngateways
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|GatewayDiagnosticLog |Gateway Diagnostic Logs |No |
-|IKEDiagnosticLog |IKE Diagnostic Logs |No |
-|P2SDiagnosticLog |P2S Diagnostic Logs |No |
-
-## Microsoft.Network/publicIPAddresses
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DDoSMitigationFlowLogs |Flow logs of DDoS mitigation decisions |No |
-|DDoSMitigationReports |Reports of DDoS mitigations |No |
-|DDoSProtectionNotifications |DDoS protection notifications |No |
-
-## Microsoft.Network/trafficManagerProfiles
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ProbeHealthStatusEvents |Traffic Manager Probe Health Results Event |No |
-
-## microsoft.network/virtualnetworkgateways
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|