Updates from: 04/08/2022 01:11:07
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Age Gating https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/age-gating.md
Previously updated : 08/24/2021 Last updated : 04/07/2022 zone_pivot_groups: b2c-policy-type
When you sign in as a minor, you should see the following error message: *Unfort
## Enable age gating in your custom policy
-1. Get the example of an age gating policy on [GitHub](https://github.com/azure-ad-b2c/samples/tree/master/policies).
+1. Get the example of an age gating policy on [GitHub](https://github.com/azure-ad-b2c/samples/tree/master/policies/age-gating).
1. In each file, replace the string `yourtenant` with the name of your Azure AD B2C tenant. For example, if the name of your B2C tenant is *contosob2c*, all instances of `yourtenant.onmicrosoft.com` become `contosob2c.onmicrosoft.com` (a scripted approach is sketched after these steps).
1. Upload the policy files.
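If you have many policy files to update, the tenant-name replacement can be scripted. The following is a minimal sketch (not part of the original article) that assumes the sample policy XML files sit in a local `policies` folder and that *contosob2c* is your tenant name:

```python
from pathlib import Path

TENANT = "contosob2c"  # the name of your Azure AD B2C tenant

# Replace every occurrence of the placeholder tenant name in the sample policy files.
for policy_file in Path("policies").glob("*.xml"):
    text = policy_file.read_text(encoding="utf-8")
    policy_file.write_text(text.replace("yourtenant", TENANT), encoding="utf-8")
    print(f"Updated {policy_file.name}")
```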
active-directory-b2c Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-monitor.md
workspace("AD-B2C-TENANT1").AuditLogs
## Change the data retention period
-Azure Monitor Logs are designed to scale and support collecting, indexing, and storing massive amounts of data per day from any source in your enterprise or deployed in Azure. By default, logs are retained for 30 days, but retention duration can be increased to up to two years. Learn how to [manage usage and costs with Azure Monitor Logs](../azure-monitor/logs/manage-cost-storage.md). After you select the pricing tier, you can [Change the data retention period](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period).
+Azure Monitor Logs are designed to scale and support collecting, indexing, and storing massive amounts of data per day from any source in your enterprise or deployed in Azure. By default, logs are retained for 30 days, but retention duration can be increased to up to two years. Learn how to [manage usage and costs with Azure Monitor Logs](../azure-monitor/logs/cost-logs.md). After you select the pricing tier, you can [Change the data retention period](../azure-monitor/logs/data-retention-archive.md).
## Next steps
active-directory Troubleshoot Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-sspr-writeback.md
A best practice when you troubleshoot problems with password writeback is to ins
| Code | Name or message | Description |
| --- | --- | --- |
| 6329 | BAIL: MMS(4924) 0x80230619: "A restriction prevents the password from being changed to the current one specified." | This event occurs when the password writeback service attempts to set a password on your local directory that doesn't meet the password age, history, complexity, or filtering requirements of the domain. <br> <br> If you have a minimum password age and have recently changed the password within that window of time, you're not able to change the password again until it reaches the specified age in your domain. For testing purposes, the minimum age should be set to 0. <br> <br> If you have password history requirements enabled, then you must select a password that hasn't been used in the last *N* times, where *N* is the password history setting. If you do select a password that has been used in the last *N* times, then you see a failure in this case. For testing purposes, the password history should be set to 0. <br> <br> If you have password complexity requirements, all of them are enforced when the user attempts to change or reset a password. <br> <br> If you have password filters enabled and a user selects a password that doesn't meet the filtering criteria, then the reset or change operation fails. |
-| 6329 | MMS(3040): admaexport.cpp(2837): The server doesn't contain the LDAP password policy control. | This problem occurs if LDAP_SERVER_POLICY_HINTS_OID control (1.2.840.113556.1.4.2066) isn't enabled on the DCs. To use the password writeback feature, you must enable the control. To do so, the DCs must be on Windows Server 2008R2 or later. |
+| 6329 | MMS(3040): admaexport.cpp(2837): The server doesn't contain the LDAP password policy control. | This problem occurs if LDAP_SERVER_POLICY_HINTS_OID control (1.2.840.113556.1.4.2066) isn't enabled on the DCs. To use the password writeback feature, you must enable the control. To do so, the DCs must be on Windows Server 2016 or later. |
| HR 8023042 | Synchronization Engine returned an error hr=80230402, message=An attempt to get an object failed because there are duplicated entries with the same anchor. | This error occurs when the same user ID is enabled in multiple domains. An example is if you're syncing account and resource forests and have the same user ID present and enabled in each forest. <br> <br> This error can also occur if you use a non-unique anchor attribute, like an alias or UPN, and two users share that same anchor attribute. <br> <br> To resolve this problem, ensure that you don't have any duplicated users within your domains and that you use a unique anchor attribute for each user. |

### If the source of the event is PasswordResetService
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/redemption-experience.md
Previously updated : 03/31/2022 Last updated : 04/07/2022
For just-in-time (JIT) redemptions, where redemption is through a tenanted appli
## Consent experience for the guest
-When a guest signs in to access resources in a partner organization for the first time, they're guided through the following pages.
+When a guest signs in to a resource in a partner organization for the first time, they're presented with the following consent experience. These consent pages are shown to the guest only after sign-in, and they aren't displayed at all if the user has already accepted them.
1. The guest reviews the **Review permissions** page describing the inviting organization's privacy statement. A user must **Accept** the use of their information in accordance with the inviting organization's privacy policies to continue.
When a guest signs in to access resources in a partner organization for the firs
![Screenshot showing the Apps access panel](media/redemption-experience/myapps.png)
-> [!NOTE]
-> The consent experience appears only after the user signs in, and not before. There are some scenarios where the consent experience will not be displayed to the user, for example:
-> - The user already accepted the consent experience
-> - The admin [grants tenant-wide admin consent to an application](../manage-apps/grant-admin-consent.md)
 - In your directory, the guest's **Invitation accepted** value changes to **Yes**. If an MSA was created, the guest's **Source** shows **Microsoft Account**. For more information about guest user account properties, see [Properties of an Azure AD B2B collaboration user](user-properties.md). If you see an error that requires admin consent while accessing an application, see [how to grant admin consent to apps](../develop/v2-admin-consent.md).
active-directory Concept Activity Logs Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md
The following table contains estimated costs per month for a basic Event Hub in
-To review costs related to managing the Azure Monitor logs, see [Manage cost by controlling data volume and retention in Azure Monitor logs](../../azure-monitor/logs/manage-cost-storage.md).
+To review costs related to managing the Azure Monitor logs, see [Azure Monitor Logs pricing details](../../azure-monitor/logs/cost-logs.md).
## Frequently asked questions
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Previously updated : 03/17/2022 Last updated : 04/03/2022
This article lists the Azure AD built-in roles you can assign to allow managemen
> | [Teams Devices Administrator](#teams-devices-administrator) | Can perform management related tasks on Teams certified devices. | 3d762c5a-1b6c-493f-843e-55a3b42923d4 |
> | [Usage Summary Reports Reader](#usage-summary-reports-reader) | Can see only tenant level aggregates in Microsoft 365 Usage Analytics and Productivity Score. | 75934031-6c7e-415a-99d7-48dbd49e875e |
> | [User Administrator](#user-administrator) | Can manage all aspects of users and groups, including resetting passwords for limited admins. | fe930be7-5e62-47db-91af-98c3a49a38b1 |
+> | [Virtual Visits Administrator](#virtual-visits-administrator) | Manage and share Virtual Visits information and metrics from admin centers or the Virtual Visits app. | e300d9e7-4a2b-4295-9eff-f1c78b36cc98 |
> | [Windows 365 Administrator](#windows-365-administrator) | Can provision and manage all aspects of Cloud PCs. | 11451d60-acb2-45eb-a7d6-43d0f0125c13 |
> | [Windows Update Deployment Administrator](#windows-update-deployment-administrator) | Can create and manage all aspects of Windows Update deployments through the Windows Update for Business deployment service. | 32696413-001a-46ae-978c-ce0f6b3620d2 |
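The GUIDs in the last column are the role template IDs, which is what you reference when assigning these roles programmatically. As a hedged illustration (not part of the article), a role assignment can be created through the Microsoft Graph role management API roughly as follows, assuming the `requests` package and an access token that carries the `RoleManagement.ReadWrite.Directory` permission:

```python
import requests

TOKEN = "<access-token>"           # placeholder: obtain via your usual auth flow
PRINCIPAL_ID = "<user-object-id>"  # placeholder: the user receiving the role
ROLE_DEFINITION_ID = "fe930be7-5e62-47db-91af-98c3a49a38b1"  # User Administrator, from the table above

resp = requests.post(
    "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "principalId": PRINCIPAL_ID,
        "roleDefinitionId": ROLE_DEFINITION_ID,
        "directoryScopeId": "/",  # tenant-wide scope
    },
)
resp.raise_for_status()
print(resp.json())
```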
Users with this role have all permissions in the Azure Information Protection se
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.azure.informationProtection/allEntities/allTasks | Manage all aspects of Azure Information Protection | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets |
User can create and manage policy keys and secrets for token encryption, token s
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/b2cTrustFrameworkKeySet/allProperties/allTasks | Read and update all properties of authorization policies |
+> | microsoft.directory/b2cTrustFrameworkKeySet/allProperties/allTasks | Read and configure key sets in Azure Active Directory B2C |
## B2C IEF Policy Administrator
Users in this role have the ability to create, read, update, and delete all cust
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/b2cTrustFrameworkPolicy/allProperties/allTasks | Read and configure key sets in Azure Active Directory B2C |
+> | microsoft.directory/b2cTrustFrameworkPolicy/allProperties/allTasks | Read and configure custom policies in Azure Active Directory B2C |
## Billing Administrator
Users in this role can enable, disable, and delete devices in Azure AD and read
> | Actions | Description | > | | | > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices | > | microsoft.directory/devices/delete | Delete devices from Azure AD | > | microsoft.directory/devices/disable | Disable devices in Azure AD |
In | Can do
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Cloud App Security | > | microsoft.azure.informationProtection/allEntities/allTasks | Manage all aspects of Azure Information Protection | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health |
Users with this role have the ability to manage Azure Active Directory Condition
> | microsoft.directory/conditionalAccessPolicies/basic/update | Update basic properties for conditional access policies | > | microsoft.directory/conditionalAccessPolicies/owners/update | Update owners for conditional access policies | > | microsoft.directory/conditionalAccessPolicies/tenantDefault/update | Update the default tenant for conditional access policies |
-> | microsoft.directory/crossTenantAccessPolicies/create | Create cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/delete | Delete cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/standard/read | Read basic properties of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/owners/read | Read owners of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/policyAppliedTo/read | Read the policyAppliedTo property of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/basic/update | Update basic properties of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/owners/update | Update owners of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/tenantDefault/update | Update the default tenant for cross-tenant access policies |
## Customer LockBox Access Approver
Users in this role can manage the Desktop Analytics service. This includes the a
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.office365.desktopAnalytics/allEntities/allTasks | Manage all aspects of Desktop Analytics |
Do not use. This role is automatically assigned to the Azure AD Connect service,
> | microsoft.directory/applications/permissions/update | Update exposed permissions and required permissions on all types of applications | > | microsoft.directory/applications/policies/update | Update policies of applications | > | microsoft.directory/applications/tag/update | Update tags of applications |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/hybridAuthenticationPolicy/allProperties/allTasks | Manage hybrid authentication policy in Azure AD | > | microsoft.directory/organization/dirSync/update | Update the organization directory sync property | > | microsoft.directory/passwordHashSync/allProperties/allTasks | Manage all aspects of Password Hash Synchronization (PHS) in Azure AD |
Users with this role can create and manage user flows (also called "built-in" po
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/b2cUserFlow/allProperties/allTasks | Read and configure user attributes in Azure Active Directory B2C |
+> | microsoft.directory/b2cUserFlow/allProperties/allTasks | Read and configure user flow in Azure Active Directory B2C |
## External ID User Flow Attribute Administrator
Users with this role add or delete custom attributes available to all user flows
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/b2cUserAttribute/allProperties/allTasks | Read and configure custom policies in Azure Active Directory B2C |
+> | microsoft.directory/b2cUserAttribute/allProperties/allTasks | Read and configure user attribute in Azure Active Directory B2C |
## External Identity Provider Administrator
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/users/authenticationMethods/delete | Delete authentication methods for users | > | microsoft.directory/users/authenticationMethods/standard/read | Read standard properties of authentication methods for users | > | microsoft.directory/users/authenticationMethods/basic/update | Update basic properties of authentication methods for users |
-> | microsoft.directory/authorizationPolicy/allProperties/allTasks | Manage all aspects of authorization policies |
+> | microsoft.directory/authorizationPolicy/allProperties/allTasks | Manage all aspects of authorization policy |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices | > | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Cloud App Security | > | microsoft.directory/connectors/create | Create application proxy connectors |
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/passwordHashSync/allProperties/allTasks | Manage all aspects of Password Hash Synchronization (PHS) in Azure AD | > | microsoft.directory/policies/allProperties/allTasks | Create and delete policies, and read and update all properties | > | microsoft.directory/conditionalAccessPolicies/allProperties/allTasks | Manage all properties of conditional access policies |
-> | microsoft.directory/crossTenantAccessPolicies/allProperties/allTasks | Manage all aspects of cross-tenant access policies |
+> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/allowedCloudEndpoints/update | Update allowed cloud endpoints of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/basic/update | Update basic settings of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/b2bCollaboration/update | Update Azure AD B2B collaboration settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/tenantRestrictions/update | Update tenant restrictions of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/partners/create | Create cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/delete | Delete cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/b2bCollaboration/update | Update Azure AD B2B collaboration settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/tenantRestrictions/update | Update tenant restrictions of cross-tenant access policy for partners |
> | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management | > | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs | > | microsoft.directory/roleAssignments/allProperties/allTasks | Create and delete role assignments, and read and update all role assignment properties |
Users in this role can read settings and administrative information across Micro
> | microsoft.directory/applications/synchronization/standard/read | Read provisioning settings associated with the application object | > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties | > | microsoft.directory/users/authenticationMethods/standard/restrictedRead | Read standard properties of authentication methods that do not include personally identifiable information for users |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices | > | microsoft.directory/cloudAppSecurity/allProperties/read | Read all properties for Cloud app security | > | microsoft.directory/connectors/allProperties/read | Read all properties of application proxy connectors |
Users in this role can read settings and administrative information across Micro
> | microsoft.directory/permissionGrantPolicies/standard/read | Read standard properties of permission grant policies | > | microsoft.directory/policies/allProperties/read | Read all properties of policies | > | microsoft.directory/conditionalAccessPolicies/allProperties/read | Read all properties of conditional access policies |
-> | microsoft.directory/crossTenantAccessPolicies/allProperties/read | Read all properties of cross-tenant access policies |
+> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
> | microsoft.directory/deviceManagementPolicies/standard/read | Read standard properties on device management application policies | > | microsoft.directory/deviceRegistrationPolicy/standard/read | Read standard properties on device registration policies | > | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management |
Users with this role have global permissions to manage settings within Microsoft
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center | > | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
Users in this role can add, remove, and update license assignments on users, gro
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/groups/assignLicense | Assign product licenses to groups for group-based licensing | > | microsoft.directory/groups/reprocessLicenseAssignment | Reprocess license assignments for group-based licensing | > | microsoft.directory/users/assignLicense | Manage user licenses |
Users with this role can manage role assignments in Azure Active Directory, as w
> | microsoft.directory/accessReviews/definitions.groupsAssignableToRoles/delete | Delete access reviews for membership in groups that are assignable to Azure AD roles | > | microsoft.directory/accessReviews/definitions.groups/allProperties/read | Read all properties of access reviews for membership in Security and Microsoft 365 groups, including role-assignable groups. | > | microsoft.directory/administrativeUnits/allProperties/allTasks | Create and manage administrative units (including members) |
-> | microsoft.directory/authorizationPolicy/allProperties/allTasks | Manage all aspects of authorization policies |
+> | microsoft.directory/authorizationPolicy/allProperties/allTasks | Manage all aspects of authorization policy |
> | microsoft.directory/directoryRoles/allProperties/allTasks | Create and delete directory roles, and read and update all properties | > | microsoft.directory/groupsAssignableToRoles/create | Create role-assignable groups | > | microsoft.directory/groupsAssignableToRoles/delete | Delete role-assignable groups |
Azure Advanced Threat Protection | Monitor and respond to suspicious security ac
> | | | > | microsoft.directory/applications/policies/update | Update policies of applications | > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
+> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/allowedCloudEndpoints/update | Update allowed cloud endpoints of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/basic/update | Update basic settings of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/b2bCollaboration/update | Update Azure AD B2B collaboration settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/tenantRestrictions/update | Update tenant restrictions of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/partners/create | Create cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/delete | Delete cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/b2bCollaboration/update | Update Azure AD B2B collaboration settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/tenantRestrictions/update | Update tenant restrictions of cross-tenant access policy for partners |
> | microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management | > | microsoft.directory/hybridAuthenticationPolicy/allProperties/allTasks | Manage hybrid authentication policy in Azure AD | > | microsoft.directory/identityProtection/allProperties/read | Read all resources in Azure AD Identity Protection |
Azure Advanced Threat Protection | Monitor and respond to suspicious security ac
> | microsoft.directory/conditionalAccessPolicies/basic/update | Update basic properties for conditional access policies | > | microsoft.directory/conditionalAccessPolicies/owners/update | Update owners for conditional access policies | > | microsoft.directory/conditionalAccessPolicies/tenantDefault/update | Update the default tenant for conditional access policies |
-> | microsoft.directory/crossTenantAccessPolicies/create | Create cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/delete | Delete cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/standard/read | Read basic properties of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/owners/read | Read owners of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/policyAppliedTo/read | Read the policyAppliedTo property of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/basic/update | Update basic properties of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/owners/update | Update owners of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/tenantDefault/update | Update the default tenant for cross-tenant access policies |
> | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management | > | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs | > | microsoft.directory/servicePrincipals/policies/update | Update policies of service principals |
Users with this role can manage alerts and have global read-only access on secur
> | Actions | Description | > | | | > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Cloud App Security | > | microsoft.directory/identityProtection/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Azure AD Identity Protection | > | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management |
Identity Protection Center | Read all security reports and settings information
> | | | > | microsoft.directory/accessReviews/definitions/allProperties/read | Read all properties of access reviews of all reviewable resources in Azure AD | > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices | > | microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management | > | microsoft.directory/identityProtection/allProperties/read | Read all resources in Azure AD Identity Protection |
Users in this role can manage all aspects of the Microsoft Teams workload via th
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/groups/hiddenMembers/read | Read hidden members of Security groups and Microsoft 365 groups, including role-assignable groups | > | microsoft.directory/groups.unified/create | Create Microsoft 365 groups, excluding role-assignable groups | > | microsoft.directory/groups.unified/delete | Delete Microsoft 365 groups, excluding role-assignable groups |
Users in this role can manage all aspects of the Microsoft Teams workload via th
> | microsoft.office365.usageReports/allEntities/allProperties/read | Read Office 365 usage reports | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center | > | microsoft.teams/allEntities/allProperties/allTasks | Manage all resources in Teams |
+> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/allowedCloudEndpoints/update | Update allowed cloud endpoints of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/basic/update | Update basic settings of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/b2bCollaboration/update | Update Azure AD B2B collaboration settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/tenantRestrictions/update | Update tenant restrictions of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/partners/create | Create cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/delete | Delete cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/b2bCollaboration/update | Update Azure AD B2B collaboration settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/tenantRestrictions/update | Update tenant restrictions of cross-tenant access policy for partners |
## Teams Communications Administrator
Users in this role can manage aspects of the Microsoft Teams workload related to
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
Users in this role can troubleshoot communication issues within Microsoft Teams
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center | > | microsoft.office365.skypeForBusiness/allEntities/allTasks | Manage all aspects of Skype for Business Online |
Users in this role can troubleshoot communication issues within Microsoft Teams
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center | > | microsoft.office365.skypeForBusiness/allEntities/allTasks | Manage all aspects of Skype for Business Online |
Users with this role can create users, and manage all aspects of users with some
> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+## Virtual Visits Administrator
+
+Users with this role can do the following tasks:
+
+- Manage and configure all aspects of Virtual Visits in Bookings in the Microsoft 365 admin center, and in the Teams EHR connector
+- View usage reports for Virtual Visits in the Teams admin center, Microsoft 365 admin center, and Power BI
+- View features and settings in the Microsoft 365 admin center, but can't edit any settings
+
+Virtual Visits are a simple way to schedule and manage online and video appointments for staff and attendees. For example, usage reporting can show how sending SMS text messages before appointments can reduce the number of people who don't show up for appointments.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | microsoft.virtualVisits/allEntities/allProperties/allTasks | Manage and share Virtual Visits information and metrics from admin centers or the Virtual Visits app |
+> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
## Windows 365 Administrator

Users with this role have global permissions on Windows 365 resources, when the service is present. Additionally, this role contains the ability to manage users and devices in order to associate policy, as well as create and manage groups.
active-directory Adobe Identity Management Provisioning Oidc Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adobe-identity-management-provisioning-oidc-tutorial.md
+
+ Title: 'Tutorial: Configure Adobe Identity Management (OIDC) for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Adobe Identity Management (OIDC).
+
+documentationcenter: ''
+
+writer: twimmers
++
+ms.assetid: baa54168-d23a-49d8-94d1-28476138cd90
+++
+ na
+ Last updated : 04/06/2022+++
+# Tutorial: Configure Adobe Identity Management (OIDC) for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Adobe Identity Management (OIDC) and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to Adobe Identity Management (OIDC) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Adobe Identity Management (OIDC)
+> * Disable users in Adobe Identity Management (OIDC) when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Adobe Identity Management (OIDC)
+> * Provision groups and group memberships in Adobe Identity Management (OIDC)
+> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Adobe Identity Management (OIDC) (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A federated directory in the [Adobe Admin Console](https://adminconsole.adobe.com/) with verified domains.
+* Review the [Adobe documentation](https://helpx.adobe.com/enterprise/admin-guide.html/enterprise/using/add-azure-sync.ug.html) on user provisioning.
+
+> [!NOTE]
+> If your organization uses the User Sync Tool or a UMAPI integration, you must first pause the integration. Then, add Azure AD automatic provisioning to automate user management from the Azure Portal. Once Azure AD automatic provisioning is configured and running, you can completely remove the User Sync Tool or UMAPI integration.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Adobe Identity Management (OIDC)](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Adobe Identity Management (OIDC) to support provisioning with Azure AD
+
+1. Log in to the [Adobe Admin Console](https://adminconsole.adobe.com/). Navigate to **Settings > Directory Details > Sync**.
+
+1. Click **Add Sync**.
+
+ ![Add](media/adobe-identity-management-provisioning-tutorial/add-sync.png)
+
+1. Select **Sync users from Microsoft Azure** and click **Next**.
+
+ ![Screenshot that shows 'Sync users from Microsoft Azure Active Directory' selected.](media/adobe-identity-management-provisioning-tutorial/sync-users.png)
+
+1. Copy and save the **Tenant URL** and the **Secret token**. These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your Adobe Identity Management (OIDC) application in the Azure portal.
+
+ ![Sync](media/adobe-identity-management-provisioning-tutorial/token.png)
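Before entering these values in the Azure portal, you can sanity-check them with a quick request. This is only a sketch: it assumes the sync endpoint behaves like a standard SCIM 2.0 service, and the two placeholder values are the Tenant URL and Secret token copied above:

```python
import requests

ADOBE_TENANT_URL = "<tenant-url-from-adobe-admin-console>"      # placeholder
ADOBE_SECRET_TOKEN = "<secret-token-from-adobe-admin-console>"  # placeholder

# A standard SCIM 2.0 service exposes /ServiceProviderConfig; a 200 response
# suggests the URL/token pair will also work for the Azure AD provisioning service.
resp = requests.get(
    f"{ADOBE_TENANT_URL}/ServiceProviderConfig",
    headers={"Authorization": f"Bearer {ADOBE_SECRET_TOKEN}"},
    timeout=30,
)
print(resp.status_code)
```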
+
+## Step 3. Add Adobe Identity Management (OIDC) from the Azure AD application gallery
+
+Add Adobe Identity Management (OIDC) from the Azure AD application gallery to start managing provisioning to Adobe Identity Management (OIDC). If you have previously set up Adobe Identity Management (OIDC) for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, based on attributes of the user or group, or both. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+
++
+## Step 5. Configure automatic user provisioning to Adobe Identity Management (OIDC)
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Adobe Identity Management (OIDC) based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Adobe Identity Management (OIDC) in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **Adobe Identity Management (OIDC)**.
+
+ ![The Adobe Identity Management (OIDC) link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Adobe Identity Management (OIDC) Tenant URL and Secret Token retrieved earlier from Step 2. Click **Test Connection** to ensure Azure AD can connect to Adobe Identity Management (OIDC). If the connection fails, ensure your Adobe Identity Management (OIDC) account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Adobe Identity Management (OIDC)**.
+
+1. Review the user attributes that are synchronized from Azure AD to Adobe Identity Management (OIDC) in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Adobe Identity Management (OIDC) for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Adobe Identity Management (OIDC) API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Adobe Identity Management (OIDC)
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||
+ |emails[type eq "work"].value|String||
+ |addresses[type eq "work"].country|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |urn:ietf:params:scim:schemas:extension:Adobe:2.0:User:emailAliases|String||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Adobe Identity Management (OIDC)**.
+
+1. Review the group attributes that are synchronized from Azure AD to Adobe Identity Management (OIDC) in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Adobe Identity Management (OIDC) for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Adobe Identity Management (OIDC)
+ |||||
+ |displayName|String|&check;|&check;
+ |members|Reference||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Adobe Identity Management (OIDC), change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Adobe Identity Management (OIDC) by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
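For reference, the user attribute mappings reviewed in the procedure above correspond to SCIM requests that the provisioning service sends to the app's `/Users` endpoint. The following sketch is illustrative only: the field values are placeholders and the request shape assumes standard SCIM 2.0 conventions rather than anything Adobe-specific:

```python
import requests

ADOBE_TENANT_URL = "<tenant-url>"      # placeholder
ADOBE_SECRET_TOKEN = "<secret-token>"  # placeholder

# Example user shaped like the attribute mappings above; userName is the matching attribute.
user = {
    "schemas": [
        "urn:ietf:params:scim:schemas:core:2.0:User",
        "urn:ietf:params:scim:schemas:extension:Adobe:2.0:User",
    ],
    "userName": "alice@contoso.com",
    "active": True,
    "emails": [{"type": "work", "value": "alice@contoso.com"}],
    "addresses": [{"type": "work", "country": "US"}],
    "name": {"givenName": "Alice", "familyName": "Smith"},
    "urn:ietf:params:scim:schemas:extension:Adobe:2.0:User": {
        "emailAliases": "alice.smith@contoso.com"
    },
}

resp = requests.post(
    f"{ADOBE_TENANT_URL}/Users",
    headers={"Authorization": f"Bearer {ADOBE_SECRET_TOKEN}"},
    json=user,
)
print(resp.status_code, resp.text)
```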
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Adobe Identity Management Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adobe-identity-management-provisioning-tutorial.md
This tutorial describes the steps you need to perform in both Adobe Identity Man
## Capabilities supported
> [!div class="checklist"]
> * Create users in Adobe Identity Management
-> * Remove users in Adobe Identity Management when they do not require access anymore
+> * Disable users in Adobe Identity Management when they do not require access anymore
> * Keep user attributes synchronized between Azure AD and Adobe Identity Management
> * Provision groups and group memberships in Adobe Identity Management
-> * Single sign-on to Adobe Identity Management (recommended)
+> * [Single sign-on](adobe-identity-management-tutorial.md) to Adobe Identity Management (recommended)
## Prerequisites
The scenario outlined in this tutorial assumes that you already have the followi
## Step 1. Plan your provisioning deployment
1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
-2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-3. Determine what data to [map between Azure AD and Adobe Identity Management](../app-provisioning/customize-application-attributes.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Adobe Identity Management](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure Adobe Identity Management to support provisioning with Azure AD
1. Log in to the [Adobe Admin Console](https://adminconsole.adobe.com/). Navigate to **Settings > Directory Details > Sync**.
-2. Click **Add Sync**.
+1. Click **Add Sync**.
![Add](media/adobe-identity-management-provisioning-tutorial/add-sync.png)
-3. Select **Sync users from Microsoft Azure** and click **Next**.
+1. Select **Sync users from Microsoft Azure** and click **Next**.
![Screenshot that shows 'Sync users from Microsoft Azure Active Directory' selected.](media/adobe-identity-management-provisioning-tutorial/sync-users.png)
-4. Copy and save the **Tenant URL** and the **Secret token**. These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your Adobe Identity Management application in the Azure portal.
+1. Copy and save the **Tenant URL** and the **Secret token**. These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your Adobe Identity Management application in the Azure portal.
![Sync](media/adobe-identity-management-provisioning-tutorial/token.png)
This section guides you through the steps to configure the Azure AD provisioning
![Enterprise applications blade](common/enterprise-applications.png)
-2. In the applications list, select **Adobe Identity Management**.
+1. In the applications list, select **Adobe Identity Management**.
![The Adobe Identity Management link in the Applications list](common/all-applications.png)
-3. Select the **Provisioning** tab.
+1. Select the **Provisioning** tab.
![Provisioning tab](common/provisioning.png)
-4. Set the **Provisioning Mode** to **Automatic**.
+1. Set the **Provisioning Mode** to **Automatic**.
![Provisioning tab automatic](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input your Adobe Identity Management Tenant URL and Secret Token retrieved earlier from Step 2. Click **Test Connection** to ensure Azure AD can connect to Adobe Identity Management. If the connection fails, ensure your Adobe Identity Management account has Admin permissions and try again.
+1. Under the **Admin Credentials** section, input your Adobe Identity Management Tenant URL and Secret Token retrieved earlier from Step 2. Click **Test Connection** to ensure Azure AD can connect to Adobe Identity Management. If the connection fails, ensure your Adobe Identity Management account has Admin permissions and try again.
![Token](common/provisioning-testconnection-tenanturltoken.png)
-6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
![Notification Email](common/provisioning-notification-email.png)
-7. Select **Save**.
+1. Select **Save**.
-8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Adobe Identity Management**.
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Adobe Identity Management**.
-9. Review the user attributes that are synchronized from Azure AD to Adobe Identity Management in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Adobe Identity Management for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Adobe Identity Management API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+1. Review the user attributes that are synchronized from Azure AD to Adobe Identity Management in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Adobe Identity Management for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Adobe Identity Management API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
- |Attribute|Type|
- |||
- |userName|String|
- |emails[type eq "work"].value|String|
- |active|Boolean|
- |addresses[type eq "work"].country|String|
- |name.givenName|String|
- |name.familyName|String|
- |urn:ietf:params:scim:schemas:extension:Adobe:2.0:User:emailAliases|String|
+ |Attribute|Type|Supported for filtering|Required by Adobe Identity Management
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||
+ |emails[type eq "work"].value|String||
+ |addresses[type eq "work"].country|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |urn:ietf:params:scim:schemas:extension:Adobe:2.0:User:emailAliases|String||
-10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Adobe Identity Management**.
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Adobe Identity Management**.
-11. Review the group attributes that are synchronized from Azure AD to Adobe Identity Management in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Adobe Identity Management for update operations. Select the **Save** button to commit any changes.
+1. Review the group attributes that are synchronized from Azure AD to Adobe Identity Management in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Adobe Identity Management for update operations. Select the **Save** button to commit any changes.
- |Attribute|Type|
- |||
- |displayName|String|
- |members|Reference|
+ |Attribute|Type|Supported for filtering|Required by Adobe Identity Management
+ |||||
+ |displayName|String|&check;|&check;
+ |members|Reference||
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-13. To enable the Azure AD provisioning service for Adobe Identity Management, change the **Provisioning Status** to **On** in the **Settings** section.
+1. To enable the Azure AD provisioning service for Adobe Identity Management, change the **Provisioning Status** to **On** in the **Settings** section.
![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-14. Define the users and/or groups that you would like to provision to Adobe Identity Management by choosing the desired values in **Scope** in the **Settings** section.
+1. Define the users and/or groups that you would like to provision to Adobe Identity Management by choosing the desired values in **Scope** in the **Settings** section.
![Provisioning Scope](common/provisioning-scope.png)
-15. When you are ready to provision, click **Save**.
+1. When you are ready to provision, click **Save**.
![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
This operation starts the initial synchronization cycle of all users and groups
## Step 6. Monitor your deployment Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
-## Additional resources
+## More resources
* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) * [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
active-directory Bullseyetdp Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bullseyetdp-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
|userName|String|&check;|&check; |externalId|String|&check;|&check; |userType|String||&check;
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference||
|active|Boolean|| |title|String||&check; |emails[type eq "work"].value|String||&check;
active-directory Gong Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gong-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String|| |active|Boolean|| |title|String||
- |emails[type eq "work"].value|String||
+ |emails[type eq "work"].value|String||&check;
|name.givenName|String||&check; |name.familyName|String||&check; |phoneNumbers[type eq "work"].value|String||
Once you've configured provisioning, use the following resources to monitor your
* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ## Change Log
-03/23/2022 - Added support for **Group Provisioning**.
+* 03/23/2022 - Added support for **Group Provisioning**.
+* 04/06/2022 - **emails[type eq "work"].value** is now a required attribute.
## More resources
active-directory Kantegassoforjira Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kantegassoforjira-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Kantega SSO for JIRA | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Kantega SSO for JIRA.
+ Title: 'Tutorial: Integrate Azure Active Directory with Kantega SSO for JIRA | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Jira using Kantega SSO.
Previously updated : 05/27/2021 Last updated : 04/04/2022
-# Tutorial: Azure Active Directory integration with Kantega SSO for JIRA
+# Tutorial: Integrate Azure Active Directory with Kantega SSO for JIRA
-In this tutorial, you'll learn how to integrate Kantega SSO for JIRA with Azure Active Directory (Azure AD). When you integrate Kantega SSO for JIRA with Azure AD, you can:
+This tutorial will walk you through the steps of configuring single sign-on for your Azure AD users in Jira. To achieve this, we will be using the Kantega SSO app. Using this configuration, you will be able to:
-* Control in Azure AD who has access to Kantega SSO for JIRA.
-* Enable your users to be automatically signed-in to Kantega SSO for JIRA with their Azure AD accounts.
+* Control which users have Jira access from Azure AD.
+* Automatically sign in to Jira when you have an active Azure AD session.
* Manage your accounts in one central location - the Azure portal.
-## Prerequisites
+Read more on the official [Kantega SSO documentation](https://kantega-sso.atlassian.net/wiki/spaces/KSE/pages/895844483/Azure+AD).
-To configure Azure AD integration with Kantega SSO for JIRA, you need the following items:
+## Prerequisites
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+To follow this tutorial, you need:
-* Kantega SSO for JIRA single sign-on enabled subscription.
+* An active Azure AD subscription. You can set up a [free account](https://azure.microsoft.com/free/).
+* A Jira Data Center instance. You can [try it for free](https://www.atlassian.com/software/jira/download/data-center).
+* Kantega SSO app for Jira from Atlassian Marketplace. You can [try it for free](https://marketplace.atlassian.com/apps/1211923/k-sso-saml-kerberos-openid-oidc-oauth-for-jira?tab=overview&hosting=datacenter).
## Scenario description
-In this tutorial, you configure and test Azure AD single sign-on in a test environment.
+In this tutorial, you will configure and test single sign-on with Azure AD in a Jira test environment.
-* Kantega SSO for JIRA supports **SP and IDP** initiated SSO.
+* Kantega SSO supports **SAML and OIDC**.
+* Kantega SSO supports **SP and IDP** initiated SSO.
+* Kantega SSO supports automated user provisioning and deprovisioning (recommended).
+* Kantega SSO supports Just-in-Time user provisioning.
## Add Kantega SSO for JIRA from the gallery
To configure the integration of Kantega SSO for JIRA into Azure AD, you need to
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add new application, select **New application**.
+1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Kantega SSO for JIRA** in the search box.
-1. Select **Kantega SSO for JIRA** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. Select **Kantega SSO for JIRA** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
## Configure and test Azure AD SSO for Kantega SSO for JIRA
To configure and test Azure AD SSO with Kantega SSO for JIRA, perform the follow
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Kantega SSO for JIRA SSO](#configure-kantega-sso-for-jira-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Kantega SSO for JIRA test user](#create-kantega-sso-for-jira-test-user)** - to have a counterpart of B.Simon in Kantega SSO for JIRA that is linked to the Azure AD representation of user.
+1. **[Configure Kantega SSO for JIRA SSO](#configure-kantega-sso-for-jira-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Kantega SSO for JIRA test user](#create-kantega-sso-for-jira-test-user)** - to have a counterpart of B.Simon in Kantega SSO for JIRA linked to the Azure AD representation of the user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<server-base-url>/plugins/servlet/no.kantega.saml/sp/<UNIQUE_ID>/login` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign-On URL. These values are received during the configuration of Jira plugin, which is explained later in the tutorial.
+ > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign-On URL. These values are received during the configuration of the Jira plugin.
6. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
In this section, you'll create a test user in the Azure portal called B.Simon.
1. In the **User** properties, follow these steps: 1. In the **Name** field, enter `B.Simon`. 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Select the **Show password** check box, and then write down the displayed value in the **Password** box.
1. Click **Create**. ### Assign the Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure Kantega SSO for JIRA SSO
-1. In a different web browser window, sign in to your JIRA on-premises server as an administrator.
-
-1. However on cog and click the **Add-ons**.
-
- ![Screenshot that shows the "Cog" icon selected and "Add-ons" selected from the drop-down.](./media/kantegassoforjira-tutorial/settings.png)
-
-1. Under Add-ons tab section, click **Find new add-ons**. Search **Kantega SSO for JIRA (SAML & Kerberos)** and click **Install** button to install the new SAML plugin.
-
- ![Screenshot that shows the "Find new Add-ons" section with "Kantego S S O for JIRA (S A M L & Kerberos)" in the search box and the "Install" button selected.](./media/kantegassoforjira-tutorial/install-tab.png)
-
-1. The plugin installation starts.
-
- ![Screenshot that shows the plugin "Installing" dialog.](./media/kantegassoforjira-tutorial/installation.png)
-
-1. Once the installation is complete. Click **Close**.
-
- ![Screenshot that shows the "Installed and ready to go!" dialog with the "Close" action selected.](./media/kantegassoforjira-tutorial/close-tab.png)
-
-1. Click **Manage**.
-
- ![Screenshot that shows the "Kantega S S O" app page with the "Manage" button selected.](./media/kantegassoforjira-tutorial/manage-tab.png)
-
-1. New plugin is listed under **INTEGRATIONS**. Click **Configure** to configure the new plugin.
-
- ![Screenshot that shows "INTEGRATIONS" in the left-side navigation menu highlighted and the "Configure" button selected in the "Manage add-ons" section.](./media/kantegassoforjira-tutorial/integration.png)
-
-1. In the **SAML** section. Select **Azure Active Directory (Azure AD)** from the **Add identity provider** dropdown.
-
- ![Screenshot that shows the "Add identity provider" drop-down with "Azure Active Directory (Azure A D)" selected.](./media/kantegassoforjira-tutorial/identity-provider.png)
-
-1. Select subscription level as **Basic**.
-
- ![Screenshot that shows the "Preparing Azure A D" section with "Basic" selected.](./media/kantegassoforjira-tutorial/basic-tab.png)
-
-1. On the **App properties** section, perform following steps:
-
- ![Screenshot that shows the "App properties" section with the "App I D U R L" textbox and copy button highlighted, and the "Next" button selected.](./media/kantegassoforjira-tutorial/properties.png)
-
- 1. Copy the **App ID URI** value and use it as **Identifier, Reply URL, and Sign-On URL** on the **Basic SAML Configuration** section in Azure portal.
-
- 1. Click **Next**.
-
-1. On the **Metadata import** section, perform following steps:
-
- ![Screenshot that shows the "Metadata import" section with "Metadata file on my computer" selected.](./media/kantegassoforjira-tutorial/metadata.png)
-
- 1. Select **Metadata file on my computer**, and upload metadata file, which you have downloaded from Azure portal.
-
- 1. Click **Next**.
-
-1. On the **Name and SSO location** section, perform following steps:
-
- ![Screenshot that shows the "Name and S S O location" with the "Identity provider name" textbox highlighted, and the "Next" button selected.](./media/kantegassoforjira-tutorial/location.png)
+Kantega SSO can be configured to use either SAML or OIDC as its SSO protocol. Choose one of the following guides:
- 1. Add Name of the Identity Provider in **Identity provider name** textbox (e.g Azure AD).
-
- 1. Click **Next**.
-
-1. Verify the Signing certificate and click **Next**.
-
- ![Screenshot that shows the "Signature verification" section with the "Next" button selected.](./media/kantegassoforjira-tutorial/certificate.png)
-
-1. On the **JIRA user accounts** section, perform following steps:
-
- ![Screenshot that shows the "JIRA user accounts" with the "Create users in JIRA's Internal Directory if needed" option highlighted and the "Next" button selected.](./media/kantegassoforjira-tutorial/accounts.png)
-
- 1. Select **Create users in JIRA's internal Directory if needed** and enter the appropriate name of the group for users (can be multiple no. of groups separated by comma).
-
- 1. Click **Next**.
-
-1. Click **Finish**.
-
- ![Screenshot that shows the "Summary" section with teh "Finish" button selected.](./media/kantegassoforjira-tutorial/finish-tab.png)
-
-1. On the **Known domains for Azure AD** section, perform following steps:
-
- ![Configure Single Sign-On](./media/kantegassoforjira-tutorial/save-tab.png)
-
- 1. Select **Known domains** from the left panel of the page.
-
- 2. Enter domain name in the **Known domains** textbox.
-
- 3. Click **Save**.
+* [Kantega SSO setup guide for Azure AD with SAML](https://kantega-sso.atlassian.net/wiki/spaces/KSE/pages/896696394/Azure+AD+SAML)
+* [Kantega SSO setup guide for Azure AD with OIDC](https://kantega-sso.atlassian.net/wiki/spaces/KSE/pages/896598077/Azure+AD+OIDC)
### Create Kantega SSO for JIRA test user
-To enable Azure AD users to sign in to JIRA, they must be provisioned into JIRA. In Kantega SSO for JIRA, provisioning is a manual task.
-
-**To provision a user account, perform the following steps:**
-
-1. Sign in to your JIRA on-premises server as an administrator.
-
-1. Hover on cog and click the **User management**.
-
- ![Screenshot that shows the "Cog" icon selected, and "User management" selected from the drop-down.](./media/kantegassoforjira-tutorial/user.png)
-
-1. Under **User management** tab section, click **Create user**.
-
- ![Screenshot that shows the "User management" section with the "Create user" button selected.](./media/kantegassoforjira-tutorial/create-user.png)
-
-1. On the **ΓÇ£Create new userΓÇ¥** dialog page, perform the following steps:
-
- ![Add Employee](./media/kantegassoforjira-tutorial/new-user.png)
-
- 1. In the **Email address** textbox, type the email address of user like Brittasimon@contoso.com.
-
- 2. In the **Full Name** textbox, type full name of the user like Britta Simon.
-
- 3. In the **Username** textbox, type the email of user like Brittasimon@contoso.com.
-
- 4. In the **Password** textbox, type the password of user.
-
- 5. Click **Create user**.
+To enable Azure AD users to sign in to Kantega SSO for JIRA, you must provision them. The application supports Just-in-Time user provisioning, automatic user provisioning using SCIM, and manual user setup. Read more about the [different provisioning options](https://kantega-sso.atlassian.net/wiki/spaces/KSE/pages/1769694/User+provisioning).
## Test SSO
-In this section, you test your Azure AD single sign-on configuration with following options.
+In this section, you test your Azure AD single sign-on configuration with the following options.
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to Kantega SSO for JIRA Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in the Azure portal. This redirects you to the Kantega SSO for JIRA Sign-on URL, where you can initiate the login flow.
-* Go to Kantega SSO for JIRA Sign-on URL directly and initiate the login flow from there.
+* Go to Kantega SSO for JIRA Sign-on URL directly and initiate the login flow.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Kantega SSO for JIRA for which you set up the SSO.
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Kantega SSO for JIRA, for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the Kantega SSO for JIRA tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Kantega SSO for JIRA for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Kantega SSO for JIRA tile in the My Apps, you will be redirected to the application sign-on page for initiating the login flow if configured in SP mode. If configured in IDP mode, you should be automatically signed in to the Kantega SSO for JIRA, for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure Kantega SSO for JIRA you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure Kantega SSO for JIRA, you can enforce session control, which protects the exfiltration and infiltration of your organization's sensitive data in real-time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Klaxoon Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/klaxoon-provisioning-tutorial.md
This tutorial describes the steps you need to perform in both Klaxoon and Azure
> * Disable users in Klaxoon when they do not require access anymore. > * Keep user attributes synchronized between Azure AD and Klaxoon. > * Provide licenses to users in Klaxoon based on Azure AD Groups.
-> * [Single sign-on](klaxoon-saml-tutorial.md) to Klaxoon (recommended).
+> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Klaxoon (recommended).
## Prerequisites
active-directory Knowbe4 Security Awareness Training Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/knowbe4-security-awareness-training-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure KnowBe4 Security Awareness Training for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to KnowBe4 Security Awareness Training.
+
+documentationcenter: ''
+
+writer: Thwimmer
++
+ms.assetid: e71f7de4-33d0-46cc-85c9-29f24c3e1a25
+++
+ na
+ Last updated : 04/06/2022+++
+# Tutorial: Configure KnowBe4 Security Awareness Training for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both KnowBe4 Security Awareness Training and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [KnowBe4 Security Awareness Training](https://www.knowbe4.com/) using the Azure AD provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in KnowBe4 Security Awareness Training.
+> * Remove users in KnowBe4 Security Awareness Training when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and KnowBe4 Security Awareness Training.
+> * Provision groups and group memberships in KnowBe4 Security Awareness Training.
+> * [Single sign-on](knowbe4-tutorial.md) to KnowBe4 Security Awareness Training (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application administrator, Cloud Application administrator, Application Owner, or Global administrator).
+* A user account in KnowBe4 Security Awareness Training with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and KnowBe4 Security Awareness Training](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure KnowBe4 Security Awareness Training to support provisioning with Azure AD
+Follow the steps below to configure your SCIM settings in the console.
+>[!NOTE]
+>If you are switching from ADI to SCIM and you use alias email addresses, note that the SCIM integration does not support them. That information is removed once you disable **Test Mode** and a sync runs.
+
+1. From your KnowBe4 console, click your email address in the top right corner and select **Account Settings**.
+1. Navigate to the **User Management > User Provisioning** section of your settings.
+1. Select **Enable User Provisioning (User Syncing)** to display more provisioning settings.
+
+ ![User Provisioning (User Syncing)](media/knowbe4-security-awareness-training-provisioning-tutorial/user-sync.png)
+
+1. By default, the toggle will be set to **ADI**. Click the **SCIM** toggle to begin setting up.
+1. Expand your SCIM settings by clicking **+ SCIM Settings**.
+
+ ![Tenant Url](media/knowbe4-security-awareness-training-provisioning-tutorial/tenant-url.png)
+
+1. Click **Generate SCIM Token**. This will open a new window with your token ID. Copy this ID and save it to a place that you can easily access later. It is important that you save this token because once you close this window, you cannot view the token again. Once you've saved the information, click **OK** to close the window.
+
+ >[!NOTE]
+ >Once your SCIM token is generated, this button will change to the **Regenerate SCIM Token** button. See the **Troubleshooting Tips** section of this article for more information.
+
+ >[!NOTE]
+ >Your identity provider will need the token (step 5) and the tenant ID (step 6) in order to establish a connection with KnowBe4. Make sure that you save this information so it is readily available when you are ready to set up the connection with your identity provider.
+
+1. Copy the Tenant URL and save it to a place that you can easily access later.
+1. Make sure that the Test Mode option is selected.
+
+ ![Test Mode](media/knowbe4-security-awareness-training-provisioning-tutorial/test-mode.png)
+
+ >[!NOTE]
+ >We recommend keeping **Test Mode** enabled until you've configured the connection between KnowBe4 and your identity provider and have run a successful sync. Test Mode generates a report of what will happen when SCIM is enabled, so no changes are made to your console while you configure your setup. When you are ready, you can disable **Test Mode** from your **Account Settings** to enable syncing. If you are switching from ADI to SCIM, **Test Mode** will be enabled automatically after you save your **Account Settings**.
+
+1. Scroll down to the bottom of the **Account Settings** page and click **Save Changes**.
+Now that you have enabled SCIM in your KnowBe4 account, you are ready to finalize the connection with Azure AD by following the remaining steps in this tutorial.
+
+## Step 3. Add KnowBe4 Security Awareness Training from the Azure AD application gallery
+
+Add KnowBe4 Security Awareness Training from the Azure AD application gallery to start managing provisioning to KnowBe4 Security Awareness Training. If you have previously set up KnowBe4 Security Awareness Training for SSO, you can use the same application. However, it is recommended that you create a separate app when initially testing out the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to KnowBe4 Security Awareness Training
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in KnowBe4 Security Awareness Training based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for KnowBe4 Security Awareness Training in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **KnowBe4 Security Awareness Training**.
+
+ ![The KnowBe4 Security Awareness Training link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your KnowBe4 Security Awareness Training Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to KnowBe4 Security Awareness Training. If the connection fails, ensure your KnowBe4 Security Awareness Training account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to KnowBe4 Security Awareness Training**.
+
+1. Review the user attributes that are synchronized from Azure AD to KnowBe4 Security Awareness Training in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in KnowBe4 Security Awareness Training for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the KnowBe4 Security Awareness Training API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by KnowBe4 Security Awareness Training|
+ |||||
+ |userName|String|&check;|&check;
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager.value|Reference||
+ |active|Boolean||
+ |title|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |externalId|String||
+ |displayName|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String||
+ |urn:ietf:params:scim:schemas:extension:knowbe4:kmsat:2.0:User:customDate1|DateTime||
 + |urn:ietf:params:scim:schemas:extension:knowbe4:kmsat:2.0:User:customDate2|DateTime||
+ |urn:ietf:params:scim:schemas:extension:knowbe4:kmsat:2.0:User:customField1|String||
+ |urn:ietf:params:scim:schemas:extension:knowbe4:kmsat:2.0:User:customField2|String||
+ |urn:ietf:params:scim:schemas:extension:knowbe4:kmsat:2.0:User:customField3|String||
+ |urn:ietf:params:scim:schemas:extension:knowbe4:kmsat:2.0:User:customField4|String||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to KnowBe4 Security Awareness Training**.
+
+1. Review the group attributes that are synchronized from Azure AD to KnowBe4 Security Awareness Training in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in KnowBe4 Security Awareness Training for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by KnowBe4 Security Awareness Training|
+ |||||
+ |displayName|String|&check;|&check;
+ |members|Reference||
+ |externalId|String||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for KnowBe4 Security Awareness Training, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to KnowBe4 Security Awareness Training by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## Step 7. Troubleshooting Tips
+* Once SCIM has been enabled, you'll see three buttons in the SCIM section of your Account Settings that can be used for troubleshooting purposes. For more information on these options, see the list below.
+
+ ![Troubleshooting Tips](media/knowbe4-security-awareness-training-provisioning-tutorial/troubleshoot.png)
+
+ * **Regenerate SCIM token**: Use this button to generate a new SCIM token. This token can only be viewed once, so make sure you save this information before closing the window. The link between your identity providers and your KnowBe4 console will be disabled until you provide the new SCIM token.
+
+ * **Revoke SCIM token**: Use this button to disable your current SCIM token. Identity providers currently using this token will no longer be linked to your KnowBe4 console.
+
+ * **Force Sync Now**: Use this button to manually force a SCIM sync at any time, without requiring a change from your identity provider.
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Memo 22 09 Enterprise Wide Identity Management System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-enterprise-wide-identity-management-system.md
For apps using [legacy authentication protocols](../fundamentals/auth-sync-overv
* [Use Azure AD Application Proxy or Secure hybrid partner access](../manage-apps/secure-hybrid-access.md) to provide secure access.
-* Decommission access to apps that are no longer needed, or are not supported (for example, apps added by shadow IT processes). Also have an alarm for
+* Decommission access to apps that are no longer needed, or are not supported (for example, apps added by shadow IT processes).
## Connect devices
aks Monitor Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks.md
Learn more about using Container insights at [Container insights overview](../az
## Configure monitoring The following sections describe the steps required to configure full monitoring of your AKS cluster using Azure Monitor. ### Create Log Analytics workspace
-You require at least one Log Analytics workspace to support Container insights and to collect and analyze other telemetry about your AKS cluster. There is no cost for the workspace, but you do incur ingestion and retention costs when you collect data. See [Manage usage and costs with Azure Monitor Logs](../azure-monitor/logs/manage-cost-storage.md) for details.
+You require at least one Log Analytics workspace to support Container insights and to collect and analyze other telemetry about your AKS cluster. There is no cost for the workspace, but you do incur ingestion and retention costs when you collect data. See [Azure Monitor Logs pricing details](../azure-monitor/logs/cost-logs.md) for details.
If you're just getting started with Azure Monitor, then start with a single workspace and consider creating additional workspaces as your requirements evolve. Many environments will use a single workspace for all the Azure resources they monitor. You can even share a workspace used by [Microsoft Defender for Cloud and Microsoft Sentinel](../azure-monitor/vm/monitor-virtual-machine-security.md), although many customers choose to segregate their availability and performance telemetry from security data.
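If you need to create a new workspace, the following Azure CLI sketch shows one way to do it; the resource group, workspace name, location, and retention value are placeholders rather than values from this article:

```azurecli-interactive
# Create a Log Analytics workspace to collect Container insights and other AKS telemetry.
az monitor log-analytics workspace create \
    --resource-group myResourceGroup \
    --workspace-name myAKSWorkspace \
    --location eastus

# Optionally adjust the data retention period (in days) as your requirements evolve.
az monitor log-analytics workspace update \
    --resource-group myResourceGroup \
    --workspace-name myAKSWorkspace \
    --retention-time 60
```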
The logs for AKS control plane components are implemented in Azure as [resource
You need to create a diagnostic setting to collect resource logs. Create multiple diagnostic settings to send different sets of logs to different locations. See [Create diagnostic settings to send platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md) to create diagnostic settings for your AKS cluster.
-There is a cost for sending resource logs to a workspace, so you should only collect those log categories that you intend to use. Send logs to an Azure storage account to reduce costs if you need to retain the information but don't require it to be readily available for analysis. See [Resource logs](monitor-aks-reference.md#resource-logs) for a description of the categories that are available for AKS and [Manage usage and costs with Azure Monitor Logs](../azure-monitor/logs/manage-cost-storage.md) for details on the cost of ingesting and retaining log data. Start by collecting a minimal number of categories and then modify the diagnostic setting to collect additional categories as your needs increase and as you understand your associated costs.
+There is a cost for sending resource logs to a workspace, so you should only collect those log categories that you intend to use. Send logs to an Azure storage account to reduce costs if you need to retain the information but don't require it to be readily available for analysis. See [Resource logs](monitor-aks-reference.md#resource-logs) for a description of the categories that are available for AKS, and [Azure Monitor Logs pricing details](../azure-monitor/logs/cost-logs.md) for details on the cost of ingesting and retaining log data. Start by collecting a minimal number of categories and then modify the diagnostic setting to collect additional categories as your needs increase and as you understand your associated costs.
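To illustrate collecting only a minimal set of categories, the sketch below creates a diagnostic setting with the Azure CLI; the resource names and the chosen categories are placeholders to adapt to your own cluster and workspace:

```azurecli-interactive
# Look up the AKS cluster and workspace resource IDs (placeholder names).
AKS_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query id -o tsv)
WS_ID=$(az monitor log-analytics workspace show --resource-group myResourceGroup --workspace-name myAKSWorkspace --query id -o tsv)

# Send a minimal set of control plane log categories to the workspace.
az monitor diagnostic-settings create \
    --name aks-control-plane-logs \
    --resource $AKS_ID \
    --workspace $WS_ID \
    --logs '[{"category":"kube-audit-admin","enabled":true},{"category":"guard","enabled":true}]'
```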
If you're unsure about which resource logs to initially enable, use the recommendations in the following table which are based on the most common customer requirements. Enable the other categories if you later find that you require this information.
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md
You can now update an AKS cluster currently working with service principals to w
```azurecli-interactive az aks update -g <RGName> -n <AKSName> --enable-managed-identity ```
+> [!NOTE]
+> An update will only work if there is an actual VHD update to consume. If you are running the latest VHD, you will need to wait until the next VHD is available before you can update.
+>
+ > [!NOTE] > After updating, your cluster's control plane and addon pods will switch to use managed identity, but kubelet will KEEP USING SERVICE PRINCIPAL until you upgrade your agentpool. Perform an `az aks nodepool upgrade --node-image-only` on your nodes to complete the update to managed identity. >
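For example, a node image-only upgrade of a single node pool might look like the following sketch; the resource group, cluster, and node pool names are placeholders:

```azurecli-interactive
# Upgrade only the node image so kubelet picks up the managed identity configuration.
az aks nodepool upgrade \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool1 \
    --node-image-only
```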
app-service Overview Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview-zone-redundancy.md
Title: Zone redundancy in App Service Environment
description: Overview of zone redundancy in an App Service Environment. Previously updated : 11/15/2021 Last updated : 04/06/2022
You can deploy App Service Environment across [availability zones](../../availab
You configure zone redundancy when you create your App Service Environment, and all App Service plans created in that App Service Environment will be zone redundant. You can only specify zone redundancy when you're creating a new App Service Environment. Zone redundancy is only supported in a [subset of regions](./overview.md#regions).
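As a hedged sketch, creating a zone redundant App Service Environment v3 with the Azure CLI might look like the following, assuming your CLI version supports the `--zone-redundant` flag on `az appservice ase create`; all names are placeholders:

```azurecli-interactive
# Zone redundancy can only be specified at creation time.
az appservice ase create \
    --name my-asev3 \
    --resource-group myResourceGroup \
    --vnet-name myVNet \
    --subnet myAseSubnet \
    --kind asev3 \
    --zone-redundant
```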
-When a zone goes down, the App Service platform detects lost instances and automatically attempts to find new, replacement instances. If you also have autoscale configured, and if it determines that more instances are needed, autoscale also issues a request to App Service to add more instances. Autoscale behavior is independent of App Service platform behavior.
+When a zone goes down, the App Service platform detects lost instances and automatically attempts to find new, replacement instances. If you also have auto-scale configured, and if it determines that more instances are needed, auto-scale also issues a request to App Service to add more instances. Auto-scale behavior is independent of App Service platform behavior.
There's no guarantee that requests for instances in a zone-down scenario will succeed, because back-filling lost instances occurs on a best effort basis. It's a good idea to scale your App Service plans to account for losing a zone.
-Applications deployed in a zone redundant App Service Environment continue to run and serve traffic, even if other zones in the same region suffer an outage. It's possible, however, that non-runtime behaviors might still be affected by an outage in other availability zones. These behaviors might include the following: App Service plan scaling, application creation, application configuration, and application publishing. Zone redundancy for App Service Environment only ensures continued uptime for deployed applications.
+Applications deployed in a zone redundant App Service Environment continue to run and serve traffic, even if other zones in the same region suffer an outage. It's possible, however, that non-runtime behaviors might still be affected by an outage in other availability zones. These behaviors might include App Service plan scaling, application creation, application configuration, and application publishing. Zone redundancy for App Service Environment only ensures continued uptime for deployed applications.
-When the App Service platform allocates instances to a zone redundant App Service plan, it uses [best effort zone balancing offered by the underlying Azure virtual machine scale sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-balancing). An App Service plan is considered balanced if each zone has either the same number of instances, or +/- 1 instance in all of the other zones used by the App Service plan.
+When the App Service platform allocates instances to a zone redundant App Service plan, it uses [best effort zone balancing offered by the underlying Azure virtual machine scale sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-balancing). An App Service plan is considered balanced if each zone has either the same number of instances, or +/- one instance in all of the other zones used by the App Service plan.
+
+## In-region data residency
+
+A zone redundant App Service Environment stores customer data only within the region where it has been deployed. App content, settings, and secrets stored in App Service all remain within that region.
## Pricing
- There is a minimum charge of nine App Service plan instances in a zone redundant App Service Environment. There is no added charge for availability zone support if you have nine or more instances. If you have fewer than nine instances (of any size) across App Service plans in the zone redundant App Service Environment, you're charged for the difference between nine and the running instance count. This charge is for additional Windows I1v2 instances.
+ There's a minimum charge of nine App Service plan instances in a zone redundant App Service Environment. There's no added charge for availability zone support if you have nine or more instances. If you have fewer than nine instances (of any size) across App Service plans in the zone redundant App Service Environment, you're charged for the difference between nine and the running instance count. This difference is billed as Windows I1v2 instances.
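For example, if you run six instances across App Service plans in a zone redundant App Service Environment, you're billed for those six instances plus three additional Windows I1v2 instances to cover the difference up to the nine-instance minimum.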
## Next steps
automation Automation Manage Send Joblogs Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-manage-send-joblogs-log-analytics.md
AzureDiagnostics
* To understand creation and retrieval of output and error messages from runbooks, see [Monitor runbook output](automation-runbook-output-and-messages.md). * To learn more about runbook execution, how to monitor runbook jobs, and other technical details, see [Runbook execution in Azure Automation](automation-runbook-execution.md). * To learn more about Azure Monitor logs and data collection sources, see [Collecting Azure storage data in Azure Monitor logs overview](../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
-* For help troubleshooting Log Analytics, see [Troubleshooting why Log Analytics is no longer collecting data](../azure-monitor/logs/manage-cost-storage.md#troubleshooting-why-log-analytics-is-no-longer-collecting-data).
+* For help troubleshooting Log Analytics, see [Troubleshooting why Log Analytics is no longer collecting data](../azure-monitor/logs/data-collection-troubleshoot.md).
automation Configure Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/configure-alerts.md
Once you have your alerts configured, you can set up an action group, which is a
* Learn about [log queries](../../azure-monitor/logs/log-query-overview.md) to retrieve and analyze data from a Log Analytics workspace.
-* Manage [usage and costs with Azure Monitor Logs](../../azure-monitor/logs/manage-cost-storage.md) describes how to control your costs by changing your data retention period, and how to analyze and alert on your data usage.
+* [Analyze usage in Log Analytics workspace](../../azure-monitor/logs/analyze-usage.md) describes how to analyze and alert on your data usage.
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview.md
Change Tracking and Inventory makes use of [Microsoft Defender for Cloud File In
Enabling all features included in Change Tracking and Inventory might cause additional charges. Before proceeding, review [Automation Pricing](https://azure.microsoft.com/pricing/details/automation/) and [Azure Monitor Pricing](https://azure.microsoft.com/pricing/details/monitor/).
-Change Tracking and Inventory forwards data to Azure Monitor Logs, and this collected data is stored in a Log Analytics workspace. The File Integrity Monitoring (FIM) feature is available only when **Microsoft Defender for servers** is enabled. See Microsoft Defender for Cloud [Pricing](../../security-center/security-center-pricing.md) to learn more. FIM uploads data to the same Log Analytics workspace as the one created to store data from Change Tracking and Inventory. We recommend that you monitor your linked Log Analytics workspace to keep track of your exact usage. For more information about analyzing Azure Monitor Logs data usage, see [Manage usage and cost](../../azure-monitor/logs/manage-cost-storage.md).
+Change Tracking and Inventory forwards data to Azure Monitor Logs, and this collected data is stored in a Log Analytics workspace. The File Integrity Monitoring (FIM) feature is available only when **Microsoft Defender for servers** is enabled. See Microsoft Defender for Cloud [Pricing](../../security-center/security-center-pricing.md) to learn more. FIM uploads data to the same Log Analytics workspace as the one created to store data from Change Tracking and Inventory. We recommend that you monitor your linked Log Analytics workspace to keep track of your exact usage. For more information about analyzing Azure Monitor Logs data usage, see [Analyze usage in Log Analytics workspace](../../azure-monitor/logs/analyze-usage.md).
Machines connected to the Log Analytics workspace use the [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) to collect data about changes to installed software, Windows services, Windows registry and files, and Linux daemons on monitored servers. When data is available, the agent sends it to Azure Monitor Logs for processing. Azure Monitor Logs applies logic to the received data, records it, and makes it available for analysis.
The following table shows the tracked item limits per machine for Change Trackin
|Services|250| |Daemons|250|
-The average Log Analytics data usage for a machine using Change Tracking and Inventory is approximately 40 MB per month, depending on your environment. With the Usage and Estimated Costs feature of the Log Analytics workspace, you can view the data ingested by Change Tracking and Inventory in a usage chart. Use this data view to evaluate your data usage and determine how it affects your bill. See [Understand your usage and estimate costs](../../azure-monitor/logs/manage-cost-storage.md#understand-your-usage-and-estimate-costs).
+The average Log Analytics data usage for a machine using Change Tracking and Inventory is approximately 40 MB per month, depending on your environment. With the Usage and Estimated Costs feature of the Log Analytics workspace, you can view the data ingested by Change Tracking and Inventory in a usage chart. Use this data view to evaluate your data usage and determine how it affects your bill. See [Understand your usage and optimize your pricing tier](../../azure-monitor/logs/usage-estimated-costs.md#understand-your-usage-and-optimize-your-pricing-tier).
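If you prefer to check ingestion from the command line, the following sketch queries the `Usage` table through the Azure CLI; the workspace GUID (the workspace customer ID) is a placeholder:

```azurecli-interactive
# Summarize billable data volume (in MB) by data type over the last 30 days.
az monitor log-analytics query \
    --workspace 00000000-0000-0000-0000-000000000000 \
    --analytics-query "Usage | where TimeGenerated > ago(30d) | where IsBillable == true | summarize IngestedMB = sum(Quantity) by DataType | sort by IngestedMB desc"
```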
### Windows services data
automation Configure Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/configure-alerts.md
Once you have your alerts configured, you can set up an action group, which is a
* Learn about [log queries](../../azure-monitor/logs/log-query-overview.md) to retrieve and analyze data from a Log Analytics workspace.
-* Manage [usage and costs with Azure Monitor Logs](../../azure-monitor/logs/manage-cost-storage.md) describes how to control your costs by changing your data retention period, and how to analyze and alert on your data usage.
+* [Azure Monitor best practices - Cost management](../../azure-monitor/best-practices-cost.md) describes how to control your costs by changing your data retention period, and how to analyze and alert on your data usage.
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md
Update Management scans managed machines for data using the following rules. It
* Each Linux machine - Update Management does a scan every hour.
-The average data usage by Azure Monitor logs for a machine using Update Management is approximately 25 MB per month. This value is only an approximation and is subject to change, depending on your environment. We recommend that you monitor your environment to keep track of your exact usage. For more information about analyzing Azure Monitor Logs data usage, see [Manage usage and cost](../../azure-monitor/logs/manage-cost-storage.md).
+The average data usage by Azure Monitor logs for a machine using Update Management is approximately 25 MB per month. This value is only an approximation and is subject to change, depending on your environment. We recommend that you monitor your environment to keep track of your exact usage. For more information about analyzing Azure Monitor Logs data usage, see [Azure Monitor Logs pricing details](../../azure-monitor/logs/cost-logs.md).
## Update classifications
azure-arc Create Data Controller Indirect Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-indirect-azure-data-studio.md
You can create a data controller using Azure Data Studio through the deployment
- You need access to a Kubernetes cluster and have your kubeconfig file configured to point to the Kubernetes cluster you want to deploy to. - You need to [install the client tools](install-client-tools.md) including **Azure Data Studio**, the Azure Data Studio extensions called **Azure Arc** and Azure CLI with the `arcdata` extension. - You need to log in to Azure in Azure Data Studio. To do this: type CTRL/Command + SHIFT + P to open the command text window and type **Azure**. Choose **Azure: Sign in**. In the panel, that comes up click the + icon in the top right to add an Azure account.
+- You need to run `az login` in your local command prompt to sign in to the Azure CLI.
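For example, a minimal sketch of signing in and confirming that the `arcdata` extension is available (assuming a default Azure CLI installation):

```console
az login
az extension add --name arcdata
az extension list --query "[?name=='arcdata']" --output table
```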
## Use the Deployment Wizard to create Azure Arc data controller
azure-arc Plan Azure Arc Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/plan-azure-arc-data-services.md
You can deploy Azure Arc-enabled data services on various types of Kubernetes cl
- OpenShift Container Platform (OCP) > [!IMPORTANT]
-> * The minimum supported version of Kubernetes is v1.19. For more information, see the "Known issues" section of [Release notes&nbsp;- Azure Arc-enabled data services](./release-notes.md#known-issues).
-> * The minimum supported version of OCP is 4.7.
+> * The minimum supported version of Kubernetes is v1.21. For more information, see the "Known issues" section of [Release notes&nbsp;- Azure Arc-enabled data services](./release-notes.md#known-issues).
+> * The minimum supported version of OCP is 4.8.
> * If you're using Azure Kubernetes Service, your cluster's worker node virtual machine (VM) size should be at least Standard_D8s_v3 and use Premium Disks. > * The cluster should not span multiple availability zones. > * For more information, see the "Known issues" section of [Release notes&nbsp;- Azure Arc-enabled data services](./release-notes.md#known-issues).
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
The GitOps agents require TCP on port 443 (`https://:443`) to function. The agen
## Enable CLI extensions >[!NOTE]
->The `k8s-configuration` CLI extension has been upgraded to manage either Flux v2 or Flux v1 configurations. Flux v2 is an important upgrade to Flux v1, and eventually GitOps support for Flux v1 will cease. Begin using Flux v2 as soon as possible.
+>The `k8s-configuration` CLI extension has been upgraded to manage either Flux v2 or Flux v1 configurations. Flux v2 is an important upgrade to Flux v1, and eventually Azure will stop supporting GitOps with Flux v1. Begin using Flux v2 as soon as possible.
Install the latest `k8s-configuration` and `k8s-extension` CLI extension packages:
az extension list -o table
Experimental ExtensionType Name Path Preview Version - -- -- -- -- --
-False whl connectedk8s C:\Users\somename\.azure\cliextensions\connectedk8s False 1.2.0
-False whl k8s-configuration C:\Users\somename\.azure\cliextensions\k8s-configuration False 1.4.1
-False whl k8s-extension C:\Users\somename\.azure\cliextensions\k8s-extension False 1.0.4
+False whl connectedk8s C:\Users\somename\.azure\cliextensions\connectedk8s False 1.2.7
+False whl k8s-configuration C:\Users\somename\.azure\cliextensions\k8s-configuration False 1.5.0
+False whl k8s-extension C:\Users\somename\.azure\cliextensions\k8s-extension False 1.1.0
``` > [!TIP]
False whl k8s-extension C:\Users\somename\.azure\c
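As a rough sketch only (not the article's exact commands), adding the two extensions, or updating them if they're already present, can look like this:

```console
az extension add --name k8s-configuration
az extension add --name k8s-extension

# If the extensions are already installed, update them to the latest versions instead:
az extension update --name k8s-configuration
az extension update --name k8s-extension
```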
## Apply a Flux configuration by using the Azure CLI
-Use the `k8s-configuration` Azure CLI extension (or the Azure portal) to enable GitOps in an AKS or Arc-enabled Kubernetes cluster. For a demonstration, use the public [flux2-kustomize-helm-example](https://github.com/fluxcd/flux2-kustomize-helm-example) repository.
+Use the `k8s-configuration` Azure CLI extension (or the Azure portal) to enable GitOps in an AKS or Arc-enabled Kubernetes cluster. For a demonstration, use the public [gitops-flux2-kustomize-helm-mt](https://github.com/Azure/gitops-flux2-kustomize-helm-mt) repository.
In the following example: * The resource group that contains the cluster is `flux-demo-rg`. * The name of the Azure Arc cluster is `flux-demo-arc`. * The cluster type is Azure Arc (`-t connectedClusters`), but this example also works with AKS (`-t managedClusters`).
-* The name of the Flux configuration is `gitops-demo`.
-* The namespace for configuration installation is `gitops-demo`.
-* The URL for the public Git repository is `https://github.com/fluxcd/flux2-kustomize-helm-example`.
+* The name of the Flux configuration is `cluster-config`.
+* The namespace for configuration installation is `cluster-config`.
+* The URL for the public Git repository is `https://github.com/Azure/gitops-flux2-kustomize-helm-mt`.
* The Git repository branch is `main`. * The scope of the configuration is `cluster`. It gives the operators permissions to make changes throughout cluster. * Two kustomizations are specified with names `infra` and `apps`. Each is associated with a path in the repository. * The `apps` kustomization depends on the `infra` kustomization. (The `infra` kustomization must finish before the `apps` kustomization runs.) * Set `prune=true` on both kustomizations. This setting assures that the objects that Flux deployed to the cluster will be cleaned up if they're removed from the repository or if the Flux configuration or kustomizations are deleted.
-If the `microsoft.flux` extension isn't already installed in the cluster, it will be installed.
+If the `microsoft.flux` extension isn't already installed in the cluster, it'll be installed. When the Flux configuration is installed, the initial compliance state may be "Pending" or "Non-compliant" because reconciliation is still ongoing. After a minute, you can query the configuration again to see the final compliance state.
```console
-az k8s-configuration flux create -g flux-demo-rg -c flux-demo-arc -n gitops-demo --namespace gitops-demo -t connectedClusters --scope cluster -u https://github.com/fluxcd/flux2-kustomize-helm-example --branch main --kustomization name=infra path=./infrastructure prune=true --kustomization name=apps path=./apps/staging prune=true dependsOn=["infra"]
+az k8s-configuration flux create -g flux-demo-rg -c flux-demo-arc -n cluster-config --namespace cluster-config -t connectedClusters --scope cluster -u https://github.com/Azure/gitops-flux2-kustomize-helm-mt --branch main --kustomization name=infra path=./infrastructure prune=true --kustomization name=apps path=./apps/staging prune=true dependsOn=["infra"]
-Command group 'k8s-configuration flux' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
-Warning! https url is being used without https auth params, ensure the repository url provided is not a private repo
'Microsoft.Flux' extension not found on the cluster, installing it now. This may take a few minutes... 'Microsoft.Flux' extension was successfully installed on the cluster
-Creating the flux configuration 'gitops-demo' in the cluster. This may take a few minutes...
+Creating the flux configuration 'cluster-config' in the cluster. This may take a few minutes...
{ "complianceState": "Pending", ... (not shown because of pending status) } ```
-Show the configuration after time to finish reconciliations.
+Show the configuration after allowing time for the reconciliations to finish.
```console
-az k8s-configuration flux show -g flux-demo-rg -c flux-demo-arc -n gitops-demo -t connectedClusters
-
-Command group 'k8s-configuration flux' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
+az k8s-configuration flux show -g flux-demo-rg -c flux-demo-arc -n cluster-config -t connectedClusters
{
+ "bucket": null,
"complianceState": "Compliant", "configurationProtectedSettings": {}, "errorMessage": "", "gitRepository": {
- "httpsCaFile": null,
+ "httpsCaCert": null,
"httpsUser": null, "localAuthRef": null, "repositoryRef": {
Command group 'k8s-configuration flux' is in preview and under development. Refe
"sshKnownHosts": null, "syncIntervalInSeconds": 600, "timeoutInSeconds": 600,
- "url": "https://github.com/fluxcd/flux2-kustomize-helm-example"
+ "url": "https://github.com/Azure/gitops-flux2-kustomize-helm-mt"
},
- "id": "/subscriptions/REDACTED/resourceGroups/flux-demo-rg/providers/Microsoft.Kubernetes/connectedClusters/flux-demo-arc/providers/Microsoft.KubernetesConfiguration/fluxConfigurations/gitops-demo",
+ "id": "/subscriptions/REDACTED/resourceGroups/flux-demo-rg/providers/Microsoft.Kubernetes/connectedClusters/flux-demo-arc/providers/Microsoft.KubernetesConfiguration/fluxConfigurations/cluster-config",
"kustomizations": { "apps": { "dependsOn": [
- {
- "kustomizationName": "infra"
- }
+ "infra"
], "force": false,
+ "name": "apps",
"path": "./apps/staging", "prune": true, "retryIntervalInSeconds": null,
Command group 'k8s-configuration flux' is in preview and under development. Refe
"timeoutInSeconds": 600 }, "infra": {
- "dependsOn": [],
+ "dependsOn": null,
"force": false,
+ "name": "infra",
"path": "./infrastructure", "prune": true, "retryIntervalInSeconds": null,
Command group 'k8s-configuration flux' is in preview and under development. Refe
"timeoutInSeconds": 600 } },
- "lastSourceSyncedAt": "2021-11-23T22:59:22+00:00",
- "lastSourceSyncedCommitId": "main/f0c2aaef48461d8099a8fff05893e9ebb96f1561",
- "name": "gitops-demo",
- "namespace": "gitops-demo",
+ "name": "cluster-config",
+ "namespace": "cluster-config",
"provisioningState": "Succeeded", "repositoryPublicKey": "",
- "resourceGroup": "flux-demo-rg",
+ "resourceGroup": "Flux2-Test-RG-EUS",
"scope": "cluster", "sourceKind": "GitRepository",
+ "sourceSyncedCommitId": "main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf",
+ "sourceUpdatedAt": "2022-04-06T17:34:03+00:00",
+ "statusUpdatedAt": "2022-04-06T17:44:56.417000+00:00",
"statuses": [ { "appliedBy": null, "complianceState": "Compliant", "helmReleaseProperties": null, "kind": "GitRepository",
- "name": "gitops-demo",
- "namespace": "gitops-demo",
+ "name": "cluster-config",
+ "namespace": "cluster-config",
"statusConditions": [ {
- "lastTransitionTime": "2021-11-23T22:59:22+00:00",
- "message": "Fetched revision: main/f0c2aaef48461d8099a8fff05893e9ebb96f1561",
+ "lastTransitionTime": "2022-04-06T17:33:32+00:00",
+ "message": "Fetched revision: main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf",
"reason": "GitOperationSucceed", "status": "True", "type": "Ready"
Command group 'k8s-configuration flux' is in preview and under development. Refe
"complianceState": "Compliant", "helmReleaseProperties": null, "kind": "Kustomization",
- "name": "gitops-demo-apps",
- "namespace": "gitops-demo",
+ "name": "cluster-config-apps",
+ "namespace": "cluster-config",
"statusConditions": [ {
- "lastTransitionTime": "2021-11-23T22:59:53+00:00",
- "message": "Applied revision: main/f0c2aaef48461d8099a8fff05893e9ebb96f1561",
+ "lastTransitionTime": "2022-04-06T17:44:04+00:00",
+ "message": "Applied revision: main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf",
"reason": "ReconciliationSucceeded", "status": "True", "type": "Ready"
Command group 'k8s-configuration flux' is in preview and under development. Refe
}, { "appliedBy": {
- "name": "gitops-demo-apps",
- "namespace": "gitops-demo"
+ "name": "cluster-config-apps",
+ "namespace": "cluster-config"
}, "complianceState": "Compliant", "helmReleaseProperties": { "failureCount": 0, "helmChartRef": {
- "name": "podinfo-podinfo",
- "namespace": "flux-system"
+ "name": "cluster-config-podinfo",
+ "namespace": "cluster-config"
}, "installFailureCount": 0, "lastRevisionApplied": 1,
Command group 'k8s-configuration flux' is in preview and under development. Refe
}, "kind": "HelmRelease", "name": "podinfo",
- "namespace": "podinfo",
+ "namespace": "cluster-config",
"statusConditions": [ {
- "lastTransitionTime": "2021-11-23T22:59:54+00:00",
+ "lastTransitionTime": "2022-04-06T17:33:43+00:00",
"message": "Release reconciliation succeeded", "reason": "ReconciliationSucceeded", "status": "True", "type": "Ready" }, {
- "lastTransitionTime": "2021-11-23T22:59:54+00:00",
+ "lastTransitionTime": "2022-04-06T17:33:43+00:00",
"message": "Helm install succeeded", "reason": "InstallSucceeded", "status": "True",
Command group 'k8s-configuration flux' is in preview and under development. Refe
"complianceState": "Compliant", "helmReleaseProperties": null, "kind": "Kustomization",
- "name": "gitops-demo-infra",
- "namespace": "gitops-demo",
+ "name": "cluster-config-infra",
+ "namespace": "cluster-config",
"statusConditions": [ {
- "lastTransitionTime": "2021-11-23T22:59:24+00:00",
- "message": "Applied revision: main/f0c2aaef48461d8099a8fff05893e9ebb96f1561",
+ "lastTransitionTime": "2022-04-06T17:43:33+00:00",
+ "message": "Applied revision: main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf",
"reason": "ReconciliationSucceeded", "status": "True", "type": "Ready"
Command group 'k8s-configuration flux' is in preview and under development. Refe
}, { "appliedBy": {
- "name": "gitops-demo-infra",
- "namespace": "gitops-demo"
+ "name": "cluster-config-infra",
+ "namespace": "cluster-config"
}, "complianceState": "Compliant", "helmReleaseProperties": null, "kind": "HelmRepository", "name": "bitnami",
- "namespace": "flux-system",
+ "namespace": "cluster-config",
"statusConditions": [ {
- "lastTransitionTime": "2021-11-23T22:59:30+00:00",
- "message": "Fetched revision: 75dd8746b22e569460eb3b453b0ae22941c680b7",
+ "lastTransitionTime": "2022-04-06T17:33:36+00:00",
+ "message": "Fetched revision: 46a41610ea410558eb485bcb673fd01c4d1f47b86ad292160b256555b01cce81",
"reason": "IndexationSucceed", "status": "True", "type": "Ready"
Command group 'k8s-configuration flux' is in preview and under development. Refe
}, { "appliedBy": {
- "name": "gitops-demo-infra",
- "namespace": "gitops-demo"
+ "name": "cluster-config-infra",
+ "namespace": "cluster-config"
}, "complianceState": "Compliant", "helmReleaseProperties": null, "kind": "HelmRepository", "name": "podinfo",
- "namespace": "flux-system",
+ "namespace": "cluster-config",
"statusConditions": [ {
- "lastTransitionTime": "2021-11-23T22:59:24+00:00",
- "message": "Fetched revision: fddc2924c28a1a1895e215a4dc065f33a0ea2e8e",
+ "lastTransitionTime": "2022-04-06T17:33:33+00:00",
+ "message": "Fetched revision: 421665ba04fab9b275b9830947417b2cebf67764eee46d568c94cf2a95a6341d",
"reason": "IndexationSucceed", "status": "True", "type": "Ready"
Command group 'k8s-configuration flux' is in preview and under development. Refe
}, { "appliedBy": {
- "name": "gitops-demo-infra",
- "namespace": "gitops-demo"
+ "name": "cluster-config-infra",
+ "namespace": "cluster-config"
}, "complianceState": "Compliant", "helmReleaseProperties": { "failureCount": 0, "helmChartRef": {
- "name": "nginx-nginx",
- "namespace": "flux-system"
+ "name": "cluster-config-nginx",
+ "namespace": "cluster-config"
}, "installFailureCount": 0, "lastRevisionApplied": 1,
Command group 'k8s-configuration flux' is in preview and under development. Refe
}, "kind": "HelmRelease", "name": "nginx",
- "namespace": "nginx",
+ "namespace": "cluster-config",
"statusConditions": [ {
- "lastTransitionTime": "2021-11-23T23:00:10+00:00",
+ "lastTransitionTime": "2022-04-06T17:34:13+00:00",
"message": "Release reconciliation succeeded", "reason": "ReconciliationSucceeded", "status": "True", "type": "Ready" }, {
- "lastTransitionTime": "2021-11-23T23:00:10+00:00",
+ "lastTransitionTime": "2022-04-06T17:34:13+00:00",
"message": "Helm install succeeded", "reason": "InstallSucceeded", "status": "True",
Command group 'k8s-configuration flux' is in preview and under development. Refe
}, { "appliedBy": {
- "name": "gitops-demo-infra",
- "namespace": "gitops-demo"
+ "name": "cluster-config-infra",
+ "namespace": "cluster-config"
}, "complianceState": "Compliant", "helmReleaseProperties": { "failureCount": 0, "helmChartRef": {
- "name": "redis-redis",
- "namespace": "flux-system"
+ "name": "cluster-config-redis",
+ "namespace": "cluster-config"
}, "installFailureCount": 0, "lastRevisionApplied": 1,
Command group 'k8s-configuration flux' is in preview and under development. Refe
}, "kind": "HelmRelease", "name": "redis",
- "namespace": "redis",
+ "namespace": "cluster-config",
"statusConditions": [ {
- "lastTransitionTime": "2021-11-23T22:59:56+00:00",
+ "lastTransitionTime": "2022-04-06T17:33:57+00:00",
"message": "Release reconciliation succeeded", "reason": "ReconciliationSucceeded", "status": "True", "type": "Ready" }, {
- "lastTransitionTime": "2021-11-23T22:59:56+00:00",
+ "lastTransitionTime": "2022-04-06T17:33:57+00:00",
"message": "Helm install succeeded", "reason": "InstallSucceeded", "status": "True", "type": "Released" } ]
+ },
+ {
+ "appliedBy": {
+ "name": "cluster-config-infra",
+ "namespace": "cluster-config"
+ },
+ "complianceState": "Compliant",
+ "helmReleaseProperties": null,
+ "kind": "HelmChart",
+ "name": "test-chart",
+ "namespace": "cluster-config",
+ "statusConditions": [
+ {
+ "lastTransitionTime": "2022-04-06T17:33:40+00:00",
+ "message": "Pulled 'redis' chart with version '11.3.4'.",
+ "reason": "ChartPullSucceeded",
+ "status": "True",
+ "type": "Ready"
+ }
+ ]
} ], "suspend": false, "systemData": {
- "createdAt": "2021-11-23T22:58:53.736245+00:00",
+ "createdAt": "2022-04-06T17:32:44.646629+00:00",
"createdBy": null, "createdByType": null,
- "lastModifiedAt": "2021-11-23T22:58:53.736245+00:00",
+ "lastModifiedAt": "2022-04-06T17:32:44.646629+00:00",
"lastModifiedBy": null, "lastModifiedByType": null },
Command group 'k8s-configuration flux' is in preview and under development. Refe
These namespaces were created: * `flux-system`: Holds the Flux extension controllers.
-* `gitops-demo`: Holds the Flux configuration objects.
+* `cluster-config`: Holds the Flux configuration objects.
* `nginx`, `podinfo`, `redis`: Namespaces for workloads described in manifests in the Git repository. ```console kubectl get namespaces
-NAME STATUS AGE
-azure-arc Active 17d
-default Active 17d
-flux-system Active 18m
-gitops-demo Active 17m
-kube-node-lease Active 17d
-kube-public Active 17d
-kube-system Active 17d
-nginx Active 17m
-podinfo Active 16m
-redis Active 17m
``` The `flux-system` namespace contains the Flux extension objects:
notification-controller-7d45678bc-fvlvr 1/1 Running 0 21m
source-controller-df7dc97cd-4drh2 1/1 Running 0 21m ```
-The namespace `gitops-demo` has the Flux configuration objects.
+The namespace `cluster-config` has the Flux configuration objects.
```console kubectl get crds NAME CREATED AT
-alerts.notification.toolkit.fluxcd.io 2021-11-23T22:57:49Z
-arccertificates.clusterconfig.azure.com 2021-11-06T15:12:36Z
-azureclusteridentityrequests.clusterconfig.azure.com 2021-11-06T15:12:36Z
-connectedclusters.arc.azure.com 2021-11-06T15:12:36Z
-customlocationsettings.clusterconfig.azure.com 2021-11-06T15:12:36Z
-extensionconfigs.clusterconfig.azure.com 2021-11-06T15:12:36Z
-fluxconfigs.clusterconfig.azure.com 2021-11-23T22:57:49Z
-gitconfigs.clusterconfig.azure.com 2021-11-06T15:12:36Z
-gitrepositories.source.toolkit.fluxcd.io 2021-11-23T22:57:49Z
-healthstates.azmon.container.insights 2021-11-06T14:45:55Z
-helmcharts.source.toolkit.fluxcd.io 2021-11-23T22:57:49Z
-helmreleases.helm.toolkit.fluxcd.io 2021-11-23T22:57:49Z
-helmrepositories.source.toolkit.fluxcd.io 2021-11-23T22:57:49Z
-kustomizations.kustomize.toolkit.fluxcd.io 2021-11-23T22:57:49Z
-providers.notification.toolkit.fluxcd.io 2021-11-23T22:57:49Z
-receivers.notification.toolkit.fluxcd.io 2021-11-23T22:57:49Z
+alerts.notification.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+arccertificates.clusterconfig.azure.com 2022-03-28T21:45:19Z
+azureclusteridentityrequests.clusterconfig.azure.com 2022-03-28T21:45:19Z
+azureextensionidentities.clusterconfig.azure.com 2022-03-28T21:45:19Z
+buckets.source.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+connectedclusters.arc.azure.com 2022-03-28T21:45:19Z
+customlocationsettings.clusterconfig.azure.com 2022-03-28T21:45:19Z
+extensionconfigs.clusterconfig.azure.com 2022-03-28T21:45:19Z
+fluxconfigs.clusterconfig.azure.com 2022-04-06T17:15:48Z
+gitconfigs.clusterconfig.azure.com 2022-03-28T21:45:19Z
+gitrepositories.source.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+helmcharts.source.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+helmreleases.helm.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+helmrepositories.source.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+imagepolicies.image.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+imagerepositories.image.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+imageupdateautomations.image.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+kustomizations.kustomize.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+providers.notification.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+receivers.notification.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+volumesnapshotclasses.snapshot.storage.k8s.io 2022-03-28T21:06:12Z
+volumesnapshotcontents.snapshot.storage.k8s.io 2022-03-28T21:06:12Z
+volumesnapshots.snapshot.storage.k8s.io 2022-03-28T21:06:12Z
+websites.extensions.example.com 2022-03-30T23:42:32Z
``` ```console kubectl get fluxconfigs -A
-NAMESPACE NAME SCOPE URL PROVISION AGE
-gitops-demo gitops-demo cluster https://github.com/fluxcd/flux2-kustomize-helm-example Succeeded 22m
+NAMESPACE NAME SCOPE URL PROVISION AGE
+cluster-config cluster-config cluster https://github.com/Azure/gitops-flux2-kustomize-helm-mt Succeeded 44m
``` ```console kubectl get gitrepositories -A
-NAMESPACE NAME URL READY STATUS AGE
-gitops-demo gitops-demo https://github.com/fluxcd/flux2-kustomize-helm-example True Fetched revision: main/f0c2aaef48461d8099a8fff05893e9ebb96f1561 22m
+NAMESPACE NAME URL READY STATUS AGE
+cluster-config cluster-config https://github.com/Azure/gitops-flux2-kustomize-helm-mt True Fetched revision: main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf 45m
``` ```console kubectl get helmreleases -A
-NAMESPACE NAME READY STATUS AGE
-nginx nginx True Release reconciliation succeeded 6d4h
-podinfo podinfo True Release reconciliation succeeded 6d4h
-redis redis True Release reconciliation succeeded 6d4h
+NAMESPACE NAME READY STATUS AGE
+cluster-config nginx True Release reconciliation succeeded 66m
+cluster-config podinfo True Release reconciliation succeeded 66m
+cluster-config redis True Release reconciliation succeeded 66m
``` ```console kubectl get kustomizations -A
-NAMESPACE NAME READY STATUS AGE
-gitops-demo gitops-demo-apps True Applied revision: main/f0c2aaef48461d8099a8fff05893e9ebb96f1561 23m
-gitops-demo gitops-demo-infra True Applied revision: main/f0c2aaef48461d8099a8fff05893e9ebb96f1561 23m
+
+NAMESPACE NAME READY STATUS AGE
+cluster-config cluster-config-apps True Applied revision: main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf 65m
+cluster-config cluster-config-infra True Applied revision: main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf 65m
``` Workloads are deployed from manifests in the Git repository.
Workloads are deployed from manifests in the Git repository.
kubectl get deploy -n nginx NAME READY UP-TO-DATE AVAILABLE AGE
-nginx-ingress-controller 1/1 1 1 25m
-nginx-ingress-controller-default-backend 1/1 1 1 25m
+nginx-ingress-controller 1/1 1 1 67m
+nginx-ingress-controller-default-backend 1/1 1 1 67m
kubectl get deploy -n podinfo NAME READY UP-TO-DATE AVAILABLE AGE
-podinfo 1/1 1 1 26m
+podinfo 1/1 1 1 68m
kubectl get all -n redis NAME READY STATUS RESTARTS AGE
-pod/redis-master-0 1/1 Running 0 95m
+pod/redis-master-0 1/1 Running 0 68m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-service/redis-headless ClusterIP None <none> 6379/TCP 95m
-service/redis-master ClusterIP 10.0.180.63 <none> 6379/TCP 95m
+service/redis-headless ClusterIP None <none> 6379/TCP 68m
+service/redis-master ClusterIP 10.0.13.182 <none> 6379/TCP 68m
NAME READY AGE
-statefulset.apps/redis-master 1/1 95m
+statefulset.apps/redis-master 1/1 68m
``` ### Delete the Flux configuration
statefulset.apps/redis-master 1/1 95m
You can delete the Flux configuration by using the following command. This action deletes both the `fluxConfigurations` resource in Azure and the Flux configuration objects in the cluster. Because the Flux configuration was originally created with the `prune=true` parameter for the kustomization, all of the objects created in the cluster based on manifests in the Git repository will be removed when the Flux configuration is removed. ```console
-az k8s-configuration flux delete -g flux-demo-rg -c flux-demo-arc -n gitops-demo -t connectedClusters --yes
+az k8s-configuration flux delete -g flux-demo-rg -c flux-demo-arc -n cluster-config -t connectedClusters --yes
```
+For an AKS cluster, use the same command, but with `-t managedClusters` in place of `-t connectedClusters`.
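For example, a sketch of the AKS form of the delete command, assuming a hypothetical AKS cluster named `flux-demo-aks` in the same resource group:

```console
az k8s-configuration flux delete -g flux-demo-rg -c flux-demo-aks -n cluster-config -t managedClusters --yes
```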
+ Note that this action does *not* remove the Flux extension. ### Delete the Flux cluster extension
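A minimal sketch of removing the extension itself with the `k8s-extension` CLI, assuming the extension instance was created with the default name `flux`:

```console
az k8s-extension delete -g flux-demo-rg -c flux-demo-arc -n flux -t connectedClusters --yes
```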
az k8s-configuration flux -h
Group az k8s-configuration flux : Commands to manage Flux v2 Kubernetes configurations.
- This command group is in preview and under development. Reference and support levels:
- https://aka.ms/CLI_refstatus
+ Subgroups: deployed-object : Commands to see deployed objects associated with Flux v2 Kubernetes configurations.
Subgroups:
configurations. Commands:
- create : Create a Flux v2 Kubernetes configuration.
- delete : Delete a Flux v2 Kubernetes configuration.
- list : List all Flux v2 Kubernetes configurations.
- show : Show a Flux v2 Kubernetes configuration.
- update : Update a Flux v2 Kubernetes configuration.
+ create : Create a Flux v2 Kubernetes configuration.
+ delete : Delete a Flux v2 Kubernetes configuration.
+ list : List all Flux v2 Kubernetes configurations.
+ show : Show a Flux v2 Kubernetes configuration.
+ update : Update a Flux v2 Kubernetes configuration.
``` Here are the parameters for the `k8s-configuration flux create` CLI command:
This command is from the following extension: k8s-configuration
Command az k8s-configuration flux create : Create a Flux v2 Kubernetes configuration.
- Command group 'k8s-configuration flux' is in preview and under development. Reference
- and support levels: https://aka.ms/CLI_refstatus
+ Arguments --cluster-name -c [Required] : Name of the Kubernetes cluster. --cluster-type -t [Required] : Specify Arc connected clusters or AKS managed clusters.
Arguments
--timeout : Maximum time to reconcile the source before timing out. Auth Arguments
- --local-auth-ref --local-ref : Local reference to a Kubernetes secret in the configuration
+ --local-auth-ref --local-ref : Local reference to a kubernetes secret in the configuration
namespace to use for communication to the source. Bucket Auth Arguments
azure-functions Configure Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-monitoring.md
Azure Functions integrates with Application Insights to better enable you to monitor your function apps. Application Insights, a feature of Azure Monitor, is an extensible Application Performance Management (APM) service that collects data generated by your function app, including information your app writes to logs. Application Insights integration is typically enabled when your function app is created. If your app doesn't have the instrumentation key set, you must first [enable Application Insights integration](#enable-application-insights-integration).
-You can use Application Insights without any custom configuration. The default configuration can result in high volumes of data. If you're using a Visual Studio Azure subscription, you might hit your data cap for Application Insights. To learn more about Application Insights costs, see [Manage usage and costs for Application Insights](../azure-monitor/app/pricing.md). For more information, see [Solutions with high-volume of telemetry](#solutions-with-high-volume-of-telemetry).
+You can use Application Insights without any custom configuration. The default configuration can result in high volumes of data. If you're using a Visual Studio Azure subscription, you might hit your data cap for Application Insights. To learn more about Application Insights costs, see [Application Insights billing](../azure-monitor/logs/cost-logs.md#application-insights-billing). For more information, see [Solutions with high-volume of telemetry](#solutions-with-high-volume-of-telemetry).
Later in this article, you learn how to configure and customize the data that your functions send to Application Insights. For a function app, logging is configured in the [host.json] file.
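If the instrumentation key isn't set yet, one way to wire it up from the command line is to add the corresponding app setting. This is a sketch only; `<function-app-name>`, `<resource-group>`, and `<instrumentation-key>` are placeholders, not values from this article:

```console
az functionapp config appsettings set --name <function-app-name> --resource-group <resource-group> --settings "APPINSIGHTS_INSTRUMENTATIONKEY=<instrumentation-key>"
```

Newer function apps typically use the `APPLICATIONINSIGHTS_CONNECTION_STRING` setting instead, so check which setting your app expects.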
azure-functions Functions Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-monitoring.md
As Application Insights instrumentation is built into Azure Functions, you need
You can try out Application Insights integration with Azure Functions for free featuring a daily limit to how much data is processed for free.
-If you enable Applications Insights during development, you might hit this limit during testing. Azure provides portal and email notifications when you're approaching your daily limit. If you miss those alerts and hit the limit, new logs won't appear in Application Insights queries. Be aware of the limit to avoid unnecessary troubleshooting time. For more information, see [Manage pricing and data volume in Application Insights](../azure-monitor/app/pricing.md).
+If you enable Application Insights during development, you might hit this limit during testing. Azure provides portal and email notifications when you're approaching your daily limit. If you miss those alerts and hit the limit, new logs won't appear in Application Insights queries. Be aware of the limit to avoid unnecessary troubleshooting time. For more information, see [Application Insights billing](../azure-monitor/logs/cost-logs.md#application-insights-billing).
> [!IMPORTANT] > Application Insights has a [sampling](../azure-monitor/app/sampling.md) feature that can protect you from producing too much telemetry data on completed executions at times of peak load. Sampling is enabled by default. If you appear to be missing data, you might need to adjust the sampling settings to fit your particular monitoring scenario. To learn more, see [Configure sampling](configure-monitoring.md#configure-sampling).
azure-government Documentation Government Overview Itar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-itar.md
+ recommendations: false Previously updated : 02/28/2022 Last updated : 04/06/2022 # Azure support for export controls
-**Disclaimer:** Customers are wholly responsible for ensuring their own compliance with all applicable laws and regulations. Information provided in this article does not constitute legal advice, and customers should consult their legal advisors for any questions regarding regulatory compliance.
- To help you navigate export control rules, Microsoft has published the [Microsoft Azure Export Controls](https://aka.ms/Azure-Export-Paper) whitepaper. It describes US export controls particularly as they apply to software and technical data, reviews potential sources of export control risks, and offers specific guidance to help you assess your obligations under these controls.
+> [!NOTE]
+> **Disclaimer:** You're wholly responsible for ensuring your own compliance with all applicable laws and regulations. Information provided in this article doesn't constitute legal advice, and you should consult your legal advisor for any questions regarding regulatory compliance.
+ ## Overview of export control laws Export related definitions vary somewhat among various export control regulations. In simplified terms, an export often implies a transfer of restricted information, materials, equipment, software, and so on, to a foreign person or foreign destination by any means. US export control policy is enforced through export control laws and regulations administered primarily by the Department of Commerce, Department of State, Department of Energy, Nuclear Regulatory Commission, and Department of Treasury. Respective agencies within each department are responsible for specific areas of export control based on their historical administration, as shown in Table 1.
Export related definitions vary somewhat among various export control regulation
|Regulator|Law/Regulation|Reference| ||||
-|**Department of Commerce: </br> Bureau of Industry and Security (BIS)**|- Export Administration Act (EAA) of 1979 </br>- Export Administration Regulations (EAR)|- [P.L. 96-72](https://www.govinfo.gov/link/statute/93/503) </br>- [15 CFR Parts 730 – 774](https://www.ecfr.gov/cgi-bin/text-idx?tpl=/ecfrbrowse/Title15/15cfrv2_02.tpl)|
-|**Department of State: </br> Directorate of Defense Trade Controls (DDTC)**|- Arms Export Control Act (AECA) </br>- International Traffic in Arms Regulations (ITAR)|- [22 U.S.C. 39](https://uscode.house.gov/view.xhtml?path=/prelim@title22/chapter39&edition=prelim) </br>- [22 CFR Parts 120 – 130](https://www.ecfr.gov/cgi-bin/text-idx?tpl=/ecfrbrowse/Title22/22cfr120_main_02.tpl)|
-|**Department of Energy: </br> National Nuclear Security Administration (NNSA)**|- Atomic Energy Act of 1954 (AEA) </br>- Assistance to Foreign Atomic Energy Activities|- [42 U.S.C. 2011 et. seq.](https://www.govinfo.gov/content/pkg/USCODE-2010-title42/html/USCODE-2010-title42-chap23-divsnA.htm) </br>- [10 CFR Part 810](https://www.ecfr.gov/cgi-bin/text-idx?tpl=/ecfrbrowse/Title10/10cfr810_main_02.tpl)|
-|**Nuclear Regulatory Commission (NRC)**|- Nuclear Non-Proliferation Act of 1978 </br>- Export and Import of Nuclear Equipment and Materials|- [P.L. 95-242](https://www.govinfo.gov/content/pkg/STATUTE-92/pdf/STATUTE-92-Pg120.pdf) </br>- [10 CFR Part 110](https://www.ecfr.gov/cgi-bin/text-idx?tpl=/ecfrbrowse/Title10/10cfr110_main_02.tpl)|
-|**Department of Treasury: </br> Office of Foreign Assets Control (OFAC)**|- Trading with the Enemy Act (TWEA) </br>- Foreign Assets Control Regulations|- [50 U.S.C. Sections 5 and 16](https://www.govinfo.gov/content/pkg/USCODE-2009-title50/pdf/USCODE-2009-title50-app-tradingwi.pdf) </br>- [31 CFR Part 500](http://www.ecfr.gov/cgi-bin/text-idx?tpl=/ecfrbrowse/Title31/31cfrv3_02.tpl)|
+|**Department of Commerce: </br> Bureau of Industry and Security (BIS)**|- Export Administration Act (EAA) of 1979 </br>- Export Administration Regulations (EAR)|- [P.L. 96-72](https://www.govinfo.gov/content/pkg/STATUTE-93/pdf/STATUTE-93-Pg503.pdf#page=1) </br>- [15 CFR Parts 730 – 774](https://www.ecfr.gov/current/title-15/subtitle-B/chapter-VII/subchapter-C)|
+|**Department of State: </br> Directorate of Defense Trade Controls (DDTC)**|- Arms Export Control Act (AECA) </br>- International Traffic in Arms Regulations (ITAR)|- [22 U.S.C. 39](https://uscode.house.gov/view.xhtml?path=/prelim@title22/chapter39&edition=prelim) </br>- [22 CFR Parts 120 – 130](https://www.ecfr.gov/current/title-22/chapter-I/subchapter-M)|
+|**Department of Energy: </br> National Nuclear Security Administration (NNSA)**|- Atomic Energy Act of 1954 (AEA) </br>- Assistance to Foreign Atomic Energy Activities|- [42 U.S.C. 2011 et. seq.](https://www.govinfo.gov/content/pkg/USCODE-2010-title42/html/USCODE-2010-title42-chap23-divsnA.htm) </br>- [10 CFR Part 810](https://www.ecfr.gov/current/title-10/chapter-III/part-810?toc=1)|
+|**Nuclear Regulatory Commission (NRC)**|- Nuclear Non-Proliferation Act of 1978 </br>- Export and Import of Nuclear Equipment and Materials|- [P.L. 95-242](https://www.govinfo.gov/content/pkg/STATUTE-92/pdf/STATUTE-92-Pg120.pdf) </br>- [10 CFR Part 110](https://www.ecfr.gov/current/title-10/chapter-I/part-110?toc=1)|
+|**Department of Treasury: </br> Office of Foreign Assets Control (OFAC)**|- Trading with the Enemy Act (TWEA) </br>- Foreign Assets Control Regulations|- [50 U.S.C. Sections 5 and 16](https://www.govinfo.gov/content/pkg/USCODE-2009-title50/pdf/USCODE-2009-title50-app-tradingwi.pdf) </br>- [31 CFR Part 500](https://www.ecfr.gov/current/title-31/subtitle-B/chapter-V)|
This article contains a review of the current US export control regulations, considerations for cloud computing, and Azure features and commitments in support of export control requirements. ## EAR
-The US Department of Commerce is responsible for enforcing the [Export Administration Regulations](https://www.bis.doc.gov/index.php/regulations/export-administration-regulations-ear) (EAR) through the [Bureau of Industry and Security](https://www.bis.doc.gov/) (BIS). According to BIS [definitions](https://www.bis.doc.gov/index.php/documents/regulation-docs/412-part-734-scope-of-the-export-administration-regulations/file), export is the transfer of protected technology or information to a foreign destination or release of protected technology or information to a foreign person in the United States (also known as deemed export). Items subject to the EAR can be found on the [Commerce Control List](https://www.bis.doc.gov/index.php/regulations/commerce-control-list-ccl) (CCL), and each item has a unique [Export Control Classification Number](https://www.bis.doc.gov/index.php/licensing/commerce-control-list-classification/export-control-classification-number-eccn) (ECCN) assigned. Items not listed on the CCL are designated as EAR99 and most EAR99 commercial products do not require a license to be exported. However, depending on the destination, end user, or end use of the item, even an EAR99 item may require a BIS export license.
+The US Department of Commerce is responsible for enforcing the [Export Administration Regulations](https://www.bis.doc.gov/index.php/regulations/export-administration-regulations-ear) (EAR) through the [Bureau of Industry and Security](https://www.bis.doc.gov/) (BIS). According to BIS [definitions](https://www.bis.doc.gov/index.php/documents/regulation-docs/412-part-734-scope-of-the-export-administration-regulations/file), export is the transfer of protected technology or information to a foreign destination or release of protected technology or information to a foreign person in the United States, also known as deemed export. Items subject to the EAR can be found on the [Commerce Control List](https://www.bis.doc.gov/index.php/regulations/commerce-control-list-ccl) (CCL), and each item has a unique [Export Control Classification Number](https://www.bis.doc.gov/index.php/licensing/commerce-control-list-classification/export-control-classification-number-eccn) (ECCN) assigned. Items not listed on the CCL are designated as EAR99, and most EAR99 commercial products don't require a license to be exported. However, depending on the destination, end user, or end use of the item, even an EAR99 item may require a BIS export license.
+
+The EAR is applicable to dual-use items that have both commercial and military applications and to items with purely commercial application. The BIS has provided guidance that cloud service providers (CSP) aren't exporters of customers’ data due to the customers’ use of cloud services. Moreover, in the [final rule](https://www.federalregister.gov/documents/2016/06/03/2016-12734/revisions-to-definitions-in-the-export-administration-regulations) published on 3 June 2016, BIS clarified that EAR licensing requirements wouldn't apply if the transmission and storage of unclassified technical data and software were encrypted end-to-end using Federal Information Processing Standard (FIPS) 140 validated cryptographic modules and not intentionally stored in a military-embargoed country, that is, Country Group D:5 as described in [Supplement No. 1 to Part 740](https://www.ecfr.gov/current/title-15/subtitle-B/chapter-VII/subchapter-C/part-740?toc=1) of the EAR, or in the Russian Federation. The US Department of Commerce has made it clear that, when data or software is uploaded to the cloud, the customer, not the cloud provider, is the *exporter* who has the responsibility to ensure that transfers, storage, and access to that data or software comply with the EAR.
-The EAR is applicable to dual-use items that have both commercial and military applications and to items with purely commercial application. The BIS has provided guidance that cloud service providers (CSP) are not exporters of customers’ data due to the customers’ use of cloud services. Moreover, in the [final rule](https://www.federalregister.gov/documents/2016/06/03/2016-12734/revisions-to-definitions-in-the-export-administration-regulations) published on 3 June 2016, BIS clarified that EAR licensing requirements would not apply if the transmission and storage of unclassified technical data and software were encrypted end-to-end using Federal Information Processing Standard (FIPS) 140 validated cryptographic modules and not intentionally stored in a military-embargoed country (that is, Country Group D:5 as described in [Supplement No. 1 to Part 740](https://ecfr.io/Title-15/pt15.2.740#ap15.2.740_121.1) of the EAR) or in the Russian Federation. The US Department of Commerce has made it clear that, when data or software is uploaded to the cloud, the customer, not the cloud provider, is the “exporter” who has the responsibility to ensure that transfers, storage, and access to that data or software complies with the EAR.
+Both Azure and Azure Government can help you meet your EAR compliance requirements. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters aren't located in proscribed countries or in the Russian Federation.
-Both Azure and Azure Government can help you meet your EAR compliance requirements. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters are not located in proscribed countries or in the Russian Federation. Azure and Azure Government rely on FIPS 140 validated cryptographic modules in the underlying operating system, and provide you with a [wide range of options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md), which can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control - known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs are not exportable – there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. Moreover, Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract your keys.
+Azure services rely on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provide you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md). The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs aren't exportable – there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.**
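As an illustration only, a premium-SKU key vault with an HSM-protected, customer-managed key can be sketched as follows; the vault name, key name, resource group, and region are placeholders, not values from this article:

```console
az keyvault create --name <vault-name> --resource-group <resource-group> --location <region> --sku premium
az keyvault key create --vault-name <vault-name> --name <key-name> --protection hsm
```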
-You are responsible for choosing Azure or Azure Government regions for deploying your applications and data. Moreover, you are responsible for designing your applications to apply end-to-end data encryption that meets EAR requirements. Microsoft does not inspect, approve, or monitor your applications deployed on Azure or Azure Government.
+You're responsible for choosing Azure or Azure Government regions for deploying your applications and data. Moreover, you're responsible for designing your applications to apply end-to-end data encryption that meets EAR requirements. Microsoft doesn't inspect, approve, or monitor your applications deployed on Azure or Azure Government.
-Azure Government provides you with an extra layer of protection through contractual commitments regarding storage of your data in the United States and limiting potential access to systems processing your data to [screened US persons](./documentation-government-plan-security.md#screening). For more information about Azure support for EAR, see [Azure EAR compliance offering](/azure/compliance/offerings/offering-ear).
+Azure Government provides an extra layer of protection through contractual commitments regarding storage of your customer data in the United States and limiting potential access to systems processing your data to [screened US persons](./documentation-government-plan-security.md#screening). For more information about Azure support for EAR, see [Azure EAR compliance offering](/azure/compliance/offerings/offering-ear).
## ITAR
-The US Department of State has export control authority over defense articles, services, and related technologies under the [International Traffic in Arms Regulations](https://www.ecfr.gov/cgi-bin/text-idx?SID=8870638858a2595a32dedceb661c482c&mc=true&tpl=/ecfrbrowse/Title22/22CIsubchapM.tpl) (ITAR) managed by the [Directorate of Defense Trade Controls](http://www.pmddtc.state.gov/) (DDTC). Items under ITAR protection are documented on the [United States Munitions List](https://www.ecfr.gov/current/title-22/part-121) (USML). Customers who are manufacturers, exporters, and brokers of defense articles, services, and related technologies as defined on the USML must be registered with DDTC, must understand and abide by ITAR, and must self-certify that they operate in accordance with ITAR.
+The US Department of State has export control authority over defense articles, services, and related technologies under the [International Traffic in Arms Regulations](https://www.ecfr.gov/current/title-22/chapter-I/subchapter-M) (ITAR) managed by the [Directorate of Defense Trade Controls](https://www.pmddtc.state.gov/ddtc_public?id=ddtc_public_portal_itar_landing) (DDTC). Items under ITAR protection are documented on the [United States Munitions List](https://www.ecfr.gov/current/title-22/part-121) (USML). If you're a manufacturer, exporter, or broker of defense articles, services, and related technologies as defined on the USML, you must be registered with DDTC, must understand and abide by ITAR, and must self-certify that you operate in accordance with ITAR.
-DDTC [revised the ITAR rules](https://www.federalregister.gov/documents/2019/12/26/2019-27438/international-traffic-in-arms-regulations-creation-of-definition-of-activities-that-are-not-exports) effective 25 March 2020 to align them more closely with the EAR. These ITAR revisions introduced an end-to-end data encryption carve-out that incorporated many of the same terms that the Commerce Department adopted in 2016 for the EAR. Specifically, the revised ITAR rules state that activities that do not constitute exports, re-exports, re-transfers, or temporary imports include (among other activities) the sending, taking, or storing of technical data that is 1) unclassified, 2) secured using end-to-end encryption, 3) secured using FIPS 140 compliant cryptographic modules as prescribed in the regulations, 4) not intentionally sent to a person in or stored in a [country proscribed in § 126.1](https://ecfr.io/Title-22/pt22.1.126#se22.1.126_11) or the Russian Federation, and 5) not sent from a country proscribed in § 126.1 or the Russian Federation. Moreover, DDTC clarified that data in-transit via the Internet is not deemed to be stored. End-to-end encryption implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption is not provided to any third party.
+DDTC [revised the ITAR rules](https://www.federalregister.gov/documents/2019/12/26/2019-27438/international-traffic-in-arms-regulations-creation-of-definition-of-activities-that-are-not-exports) effective 25 March 2020 to align them more closely with the EAR. These ITAR revisions introduced an end-to-end data encryption carve-out that incorporated many of the same terms that the US Department of Commerce adopted in 2016 for the EAR. Specifically, the revised ITAR rules state that activities that don't constitute exports, re-exports, re-transfers, or temporary imports include (among other activities) the sending, taking, or storing of technical data that is 1) unclassified, 2) secured using end-to-end encryption, 3) secured using FIPS 140 compliant cryptographic modules as prescribed in the regulations, 4) not intentionally sent to a person in or stored in a [country proscribed in § 126.1](https://www.ecfr.gov/current/title-22/chapter-I/subchapter-M/part-126?toc=1) or the Russian Federation, and 5) not sent from a country proscribed in § 126.1 or the Russian Federation. Moreover, DDTC clarified that data in-transit via the Internet isn't deemed to be stored. End-to-end encryption implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption isn't provided to any third party.
-There is no ITAR compliance certification; however, both Azure and Azure Government can help you meet your ITAR compliance obligations. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters are not located in proscribed countries or in the Russian Federation. Azure and Azure Government rely on FIPS 140 validated cryptographic modules in the underlying operating system, and provide you with a [wide range of options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md), which can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control - known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs are not exportable – there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. Moreover, Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract your keys.
+There's no ITAR compliance certification; however, both Azure and Azure Government can help you meet your ITAR compliance obligations. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters aren't located in proscribed countries or in the Russian Federation. Azure services rely on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provide you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md). The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs aren't exportable – there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.**
-You are responsible for choosing Azure or Azure Government regions for deploying your applications and data. Moreover, you are responsible for designing your applications to apply end-to-end data encryption that meets ITAR requirements. Microsoft does not inspect, approve, or monitor your applications deployed on Azure or Azure Government.
+You're responsible for choosing Azure or Azure Government regions for deploying your applications and data. Moreover, you're responsible for designing your applications to apply end-to-end data encryption that meets ITAR requirements. Microsoft doesn't inspect, approve, or monitor your applications deployed on Azure or Azure Government.
-Azure Government provides you with an extra layer of protection through contractual commitments regarding storage of your data in the United States and limiting potential access to systems processing your data to [screened US persons](./documentation-government-plan-security.md#screening). For more information about Azure support for ITAR, see [Azure ITAR compliance offering](/azure/compliance/offerings/offering-itar).
+Azure Government provides an extra layer of protection through contractual commitments regarding storage of your customer data in the United States and limiting potential access to systems processing your data to [screened US persons](./documentation-government-plan-security.md#screening). For more information about Azure support for ITAR, see [Azure ITAR compliance offering](/azure/compliance/offerings/offering-itar).
## DoE 10 CFR Part 810
-The US Department of Energy (DoE) export control regulation [10 CFR Part 810](http://www.gpo.gov/fdsys/pkg/FR-2015-02-23/pdf/2015-03479.pdf) implements section 57b.(2) of the [Atomic Energy Act of 1954](https://www.nrc.gov/docs/ML1327/ML13274A489.pdf) (AEA), as amended by section 302 of the [Nuclear Nonproliferation Act of 1978](http://www.nrc.gov/docs/ML1327/ML13274A492.pdf#page=19) (NNPA). It is administered by the [National Nuclear Security Administration](https://www.energy.gov/nnsa/national-nuclear-security-administration) (NNSA). The revised Part 810 (final rule) became effective on 25 March 2015, and, among other things, it controls the export of unclassified nuclear technology and assistance. It enables peaceful nuclear trade by helping to assure that nuclear technologies exported from the United States will not be used for non-peaceful purposes. Paragraph 810.7 (b) states that specific DoE authorization is required for providing or transferring sensitive nuclear technology to any foreign entity.
+The US Department of Energy (DoE) export control regulation [10 CFR Part 810](https://www.ecfr.gov/current/title-10/chapter-III/part-810?toc=1) implements section 57b.(2) of the [Atomic Energy Act of 1954](https://www.nrc.gov/docs/ML1327/ML13274A489.pdf) (AEA), as amended by section 302 of the [Nuclear Nonproliferation Act of 1978](http://www.nrc.gov/docs/ML1327/ML13274A492.pdf#page=19) (NNPA). It's administered by the [National Nuclear Security Administration](https://www.energy.gov/nnsa/10-cfr-part-810) (NNSA). The revised Part 810 (final rule) became effective on 25 March 2015, and, among other things, it controls the export of unclassified nuclear technology and assistance. It enables peaceful nuclear trade by helping to assure that nuclear technologies exported from the United States will be used only for peaceful purposes. Paragraph 810.7 (b) states that specific DoE authorization is required for providing or transferring sensitive nuclear technology to any foreign entity.
-**Azure Government can help you meet your DoE 10 CFR Part 810 export control requirements** because it is designed to implement specific controls that restrict access to information and systems to [US persons](./documentation-government-plan-security.md#screening) among Azure operations personnel. If you are deploying data to Azure Government, you are responsible for your own security classification process. For data subject to DoE export controls, the classification system is augmented by the [Unclassified Controlled Nuclear Information](https://www.energy.gov/sites/prod/files/hss/Classification/docs/UCNI-Tri-fold.pdf) (UCNI) controls established by Section 148 of the AEA. For more information about Azure support for DoE 10 CFR Part 810, see [Azure DoE 10 CFR Part 810 compliance offering](/azure/compliance/offerings/offering-doe-10-cfr-part-810).
+**Azure Government can help you meet your DoE 10 CFR Part 810 export control requirements** because it's designed to implement specific controls that restrict access to information and systems to [US persons](./documentation-government-plan-security.md#screening) among Azure operations personnel. If you're deploying data to Azure Government, you're responsible for your own security classification process. For data subject to DoE export controls, the classification system is augmented by the [Unclassified Controlled Nuclear Information](https://www.energy.gov/sites/prod/files/hss/Classification/docs/UCNI-Tri-fold.pdf) (UCNI) controls established by Section 148 of the AEA. For more information about Azure support for DoE 10 CFR Part 810, see [Azure DoE 10 CFR Part 810 compliance offering](/azure/compliance/offerings/offering-doe-10-cfr-part-810).
## NRC 10 CFR Part 110
-The [Nuclear Regulatory Commission](https://www.nrc.gov/) (NRC) is responsible for the [Export and Import of Nuclear Equipment and Materials](https://www.nrc.gov/about-nrc/ip/export-import.html) under the [10 CFR Part 110](https://www.gpo.gov/fdsys/pkg/FR-2015-02-23/pdf/2015-03479.pdf) export control regulations. The NRC regulates the export and import of nuclear facilities and related equipment and materials. The NRC does not regulate nuclear technology and assistance related to these items, which are under the DoE jurisdiction. Therefore, the **NRC 10 CFR Part 110 regulations would not be applicable** to Azure or Azure Government.
+The [Nuclear Regulatory Commission](https://www.nrc.gov/) (NRC) is responsible for the [Export and import of nuclear equipment and materials](https://www.nrc.gov/about-nrc/ip/export-import.html) under the [10 CFR Part 110](https://www.ecfr.gov/current/title-10/chapter-I/part-110?toc=1) export control regulations. The NRC regulates the export and import of nuclear facilities and related equipment and materials. The NRC doesn't regulate nuclear technology and assistance related to these items, which are under the DoE jurisdiction. Therefore, the **NRC 10 CFR Part 110 regulations wouldn't be applicable** to Azure or Azure Government.
## OFAC Sanctions Laws
-The [Office of Foreign Assets Control](https://www.treasury.gov/about/organizational-structure/offices/Pages/Office-of-Foreign-Assets-Control.aspx) (OFAC) is responsible for administering and enforcing economic and trade sanctions based on US foreign policy and national security goals against targeted foreign countries, terrorists, international narcotics traffickers, and those entities engaged in activities related to the proliferation of weapons of mass destruction.
+The [Office of Foreign Assets Control](https://home.treasury.gov/policy-issues/office-of-foreign-assets-control-sanctions-programs-and-information) (OFAC) is responsible for administering and enforcing economic and trade sanctions based on US foreign policy and national security goals against targeted foreign countries, terrorists, international narcotics traffickers, and those entities engaged in activities related to the proliferation of weapons of mass destruction.
-The OFAC defines prohibited transactions as trade or financial transactions and other dealings in which US persons may not engage unless authorized by OFAC or expressly exempted by statute. For web-based interactions, see [FAQ No. 73](https://home.treasury.gov/policy-issues/financial-sanctions/faqs/73) for general guidance released by OFAC, which specifies for example that &#8220;Firms that facilitate or engage in e-commerce should do their best to know their customers directly.&#8221;
+The OFAC defines prohibited transactions as trade or financial transactions and other dealings in which US persons may not engage unless authorized by OFAC or expressly exempt by statute. For web-based interactions, see [FAQ No. 73](https://home.treasury.gov/policy-issues/financial-sanctions/faqs/73) for general guidance released by OFAC, which specifies, for example, that &#8220;Firms that facilitate or engage in e-commerce should do their best to know their customers directly.&#8221;
-As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/dpa) (DPA), &#8220;Microsoft does not control or limit the regions from which customer or customer's end users may access or move customer data.&#8221; For Microsoft online services, Microsoft conducts due diligence to prevent transactions with entities from OFAC embargoed countries, for example, a sanctions target is not allowed to provision Azure services. OFAC has not issued guidance (like the guidance provided by BIS for the EAR) that draws a distinction between cloud service providers and customers when it comes to deemed export. Therefore, it would be **your responsibility to exclude sanctions targets from online transactions** involving your applications (including web sites) deployed on Azure. Microsoft does not block network traffic to your web sites deployed on Azure. Even though OFAC mentions that customers can restrict access based on IP table ranges, they also acknowledge that this approach does not fully address an internet firm's compliance risks. Therefore, OFAC recommends that e-commerce firms should know their customers directly. Microsoft is not responsible for and does not have the means to know directly the end users that interact with your applications deployed on Azure.
+As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/dpa) (DPA), &#8220;Microsoft doesn't control or limit the regions from which customer or customer's end users may access or move customer data.&#8221; For Microsoft online services, Microsoft conducts due diligence to prevent transactions with entities from OFAC embargoed countries. For example, a sanctions target isn't allowed to provision Azure services. OFAC hasn't issued guidance, like the guidance provided by BIS for the EAR, that draws a distinction between cloud service providers and customers when it comes to deemed export. Therefore, it would be **your responsibility to exclude sanctions targets from online transactions** involving your applications, including web sites, deployed on Azure. Microsoft doesn't block network traffic to your web sites deployed on Azure. Even though OFAC mentions that customers can restrict access based on IP table ranges, they also acknowledge that this approach doesn't fully address an internet firm's compliance risks. Therefore, OFAC recommends that e-commerce firms should know their customers directly. Microsoft isn't responsible for and doesn't have the means to know directly the end users that interact with your applications deployed on Azure.
-OFAC sanctions are in place to prevent &#8220;conducting business with a sanctions target&#8221;, that is, preventing transactions involving trade, payments, financial instruments, and so on. OFAC sanctions are not intended to prevent a resident of a proscribed country from viewing a public web site.
+OFAC sanctions are in place to prevent &#8220;conducting business with a sanctions target&#8221;, that is, preventing transactions involving trade, payments, financial instruments, and so on. OFAC sanctions aren't intended to prevent a resident of a proscribed country from viewing a public web site.
## Managing export control requirements
-You should assess carefully how your use of Azure may implicate US export controls and determine whether any of the data you want to store or process in the cloud may be subject to export controls. Microsoft provides you with contractual commitments, operational processes, and technical features to help you meet your export control obligations when using Azure. The following Azure features are available to help you manage potential export control risks:
+You should assess carefully how your use of Azure may implicate US export controls, and determine whether any of the data you want to store or process in the cloud may be subject to export controls. Microsoft provides you with contractual commitments, operational processes, and technical features to help you meet your export control obligations when using Azure. The following Azure features are available to help you manage potential export control risks:
-- **Ability to control data location** - You have visibility as to where your [data is stored](https://azure.microsoft.com/global-infrastructure/data-residency/), and robust tools to restrict data storage to a single geography, region, or country. For example, you may therefore ensure that data is stored in the United States or your country of choice and minimize transfer of controlled technology/technical data outside the target country. Your data is not *intentionally stored* in a non-conforming location, consistent with the EAR and ITAR rules.-- **End-to-end encryption** - Implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption is not provided to any third party. Azure relies on FIPS 140 validated cryptographic modules in the underlying operating system, and provides you with a [wide range of options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md), which can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control ([customer-managed keys](../security/fundamentals/encryption-models.md), CMK). Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract your keys.-- **Control over access to data** - You can know and control who can access your data and on what terms. Microsoft technical support personnel do not need and do not have default access to your data. For those rare instances where resolving your support requests requires elevated access to your data, [Customer Lockbox for Azure](../security/fundamentals/customer-lockbox-overview.md) puts you in charge of approving or denying data access requests.-- **Tools and protocols to prevent unauthorized deemed export/re-export** - Apart from the EAR and ITAR *end-to-end encryption* safe harbor for physical storage locations, the use of encryption also helps protect against a potential deemed export (or deemed re-export), because even if a non-US person has access to the encrypted data, nothing is revealed to non-US person who cannot read or understand the data while it is encrypted and thus there is no release of any controlled data. However, ITAR requires some authorization before granting foreign persons with access information that would enable them to decrypt ITAR technical data. Azure offers a wide range of encryption capabilities and solutions, flexibility to choose among encryption options, and robust tools for managing encryption.
+- **Ability to control data location** – You have visibility as to where your [data is stored](https://azure.microsoft.com/global-infrastructure/data-residency/), and robust tools to restrict data storage to a single geography, region, or country. For example, you may therefore ensure that data is stored in the United States or your country of choice and minimize transfer of controlled technology/technical data outside the target country. Your data isn't *intentionally stored* in a non-conforming location, consistent with the EAR and ITAR rules.
+- **End-to-end encryption** – Implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption isn't provided to any third party. Azure relies on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provides you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md). The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.
+- **Control over access to data** – You can know and control who can access your data and on what terms. Microsoft technical support personnel don't need and don't have default access to your data. For those rare instances where resolving your support requests requires elevated access to your data, [Customer Lockbox for Azure](../security/fundamentals/customer-lockbox-overview.md) puts you in charge of approving or denying data access requests.
+- **Tools and protocols to prevent unauthorized deemed export/re-export** – Apart from the EAR and ITAR *end-to-end encryption* safe harbor for physical storage locations, the use of encryption also helps protect against a potential deemed export, or deemed re-export, because even if a non-US person has access to the encrypted data, nothing is revealed to a non-US person who can't read or understand the data while it's encrypted, and thus there's no release of any controlled data. However, ITAR requires authorization before granting foreign persons access to information that would enable them to decrypt ITAR technical data. Azure offers a wide range of encryption capabilities and solutions, flexibility to choose among encryption options, and robust tools for managing encryption.
## Location of customer data
-Microsoft provides [strong customer commitments](https://www.microsoft.com/trust-center/privacy/data-location) regarding [cloud services data residency and transfer policies](https://azure.microsoft.com/global-infrastructure/data-residency/). Most Azure services are deployed regionally and enable you to specify the region into which the service will be deployed, for example, United States. This commitment helps ensure that [customer data](https://www.microsoft.com/trust-center/privacy/customer-data-definitions) stored in a US region will remain in the United States and will not be moved to another region outside the United States.
+Microsoft provides [strong customer commitments](https://www.microsoft.com/trust-center/privacy/data-location) regarding [cloud services data residency and transfer policies](https://azure.microsoft.com/global-infrastructure/data-residency/). Most Azure services are deployed regionally and enable you to specify the region into which the service will be deployed, for example, United States. This commitment helps ensure that [customer data](https://www.microsoft.com/trust-center/privacy/customer-data-definitions) stored in a US region will remain in the United States and won't be moved to another region outside the United States.
## Data encryption
Data encryption provides isolation assurances that are tied directly to encrypti
### FIPS 140 validated cryptography
-The [Federal Information Processing Standard (FIPS) 140](https://csrc.nist.gov/publications/detail/fips/140/2/final) is a US government standard that defines minimum security requirements for cryptographic modules in information technology products. The current version of the standard, FIPS 140-2, has security requirements covering 11 areas related to the design and implementation of a cryptographic module. Microsoft maintains an active commitment to meeting the [FIPS 140 requirements](/azure/compliance/offerings/offering-fips-140-2), having validated cryptographic modules since the standard's inception in 2001. Microsoft validates its cryptographic modules under the US National Institute of Standards and Technology (NIST) [Cryptographic Module Validation Program](https://csrc.nist.gov/Projects/cryptographic-module-validation-program) (CMVP). Multiple Microsoft products, including many cloud services, use these cryptographic modules.
+The [Federal Information Processing Standard (FIPS) 140](https://csrc.nist.gov/publications/detail/fips/140/3/final) is a US government standard that defines minimum security requirements for cryptographic modules in information technology products. The current version of the standard, FIPS 140-3, has security requirements covering 11 areas related to the design and implementation of a cryptographic module. Microsoft maintains an active commitment to meeting the [FIPS 140 requirements](/azure/compliance/offerings/offering-fips-140-2), having validated cryptographic modules since the standard's inception in 2001. Microsoft validates its cryptographic modules under the US National Institute of Standards and Technology (NIST) [Cryptographic Module Validation Program](https://csrc.nist.gov/Projects/cryptographic-module-validation-program) (CMVP). Multiple Microsoft products, including many cloud services, use these cryptographic modules.
While the current CMVP FIPS 140 implementation guidance precludes a FIPS 140 validation for a cloud service, cloud service providers can obtain and operate FIPS 140 validated cryptographic modules for the computing elements that comprise their cloud services. Azure is built with a combination of hardware, commercially available operating systems (Linux and Windows), and an Azure-specific version of Windows. Through the Microsoft [Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) (SDL), all Azure services use FIPS 140 approved algorithms for data security because the operating system uses FIPS 140 approved algorithms while operating in a hyperscale cloud. The corresponding crypto modules are FIPS 140 validated as part of the Microsoft [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). Moreover, you can store your own cryptographic keys and other secrets in FIPS 140 validated hardware security modules (HSMs).
Azure provides many options for [encrypting data in transit](../security/fundame
Azure provides extensive options for [encrypting data at rest](../security/fundamentals/encryption-atrest.md) to help you safeguard your data and meet your compliance needs using both Microsoft-managed encryption keys and customer-managed encryption keys. This process relies on multiple encryption keys and services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management. For more information about Azure Storage encryption and Azure Disk encryption, see [Data encryption at rest](./azure-secure-isolation-guidance.md#data-encryption-at-rest).
-Azure SQL Database provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest by [default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/). TDE performs real-time encryption and decryption operations on the data and log files. Database Encryption Key (DEK) is a symmetric key stored in the database boot record for availability during recovery. It is secured via a certificate stored in the master database of the server or an asymmetric key called TDE Protector stored under your control in [Azure Key Vault](../key-vault/general/security-features.md). Key Vault supports [bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md) (BYOK), which enables you to store the TDE Protector in Key Vault and control key management tasks including key rotation, permissions, deleting keys, enabling auditing/reporting on all TDE Protectors, and so on. The key can be generated by the Key Vault, imported, or [transferred to the Key Vault from an on-premises HSM device](../key-vault/keys/hsm-protected-keys.md). You can also use the [Always Encrypted](../azure-sql/database/always-encrypted-azure-key-vault-configure.md) feature of Azure SQL Database, which is designed specifically to help protect sensitive data by allowing you to encrypt data inside your applications and [never reveal the encryption keys to the database engine](/sql/relational-databases/security/encryption/always-encrypted-database-engine). In this manner, Always Encrypted provides separation between those users who own the data (and can view it) and those users who manage the data (but should have no access).
+Azure SQL Database provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest by [default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/). TDE performs real-time encryption and decryption operations on the data and log files. Database Encryption Key (DEK) is a symmetric key stored in the database boot record for availability during recovery. It's secured via a certificate stored in the master database of the server or an asymmetric key called TDE Protector stored under your control in [Azure Key Vault](../key-vault/general/security-features.md). Key Vault supports [bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md) (BYOK), which enables you to store the TDE Protector in Key Vault and control key management tasks including key rotation, permissions, deleting keys, enabling auditing/reporting on all TDE Protectors, and so on. The key can be generated by the Key Vault, imported, or [transferred to the Key Vault from an on-premises HSM device](../key-vault/keys/hsm-protected-keys.md). You can also use the [Always Encrypted](../azure-sql/database/always-encrypted-azure-key-vault-configure.md) feature of Azure SQL Database, which is designed specifically to help protect sensitive data by allowing you to encrypt data inside your applications and [never reveal the encryption keys to the database engine](/sql/relational-databases/security/encryption/always-encrypted-database-engine). In this manner, Always Encrypted provides separation between those users who own the data and can view it and those users who manage the data but should have no access.
## Restrictions on insider access
azure-monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-manage.md
description: This article describes the different management tasks that you will
Previously updated : 06/14/2019 Last updated : 04/06/2022
azure-monitor Data Sources Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-syslog.md
Title: Collect Syslog data sources with Log Analytics agent in Azure Monitor description: Syslog is an event logging protocol that is common to Linux. This article describes how to configure collection of Syslog messages in Log Analytics and details of the records they create. -- Previously updated : 02/26/2021 Last updated : 04/06/2022
azure-monitor Data Sources Windows Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-windows-events.md
Title: Collect Windows event log data sources with Log Analytics agent in Azure Monitor description: Describes how to configure the collection of Windows Event logs by Azure Monitor and details of the records they create. -- Previously updated : 02/26/2021 Last updated : 04/06/2022
azure-monitor Diagnostics Extension Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-overview.md
Title: Azure Diagnostics extension overview description: Use Azure diagnostics for debugging, measuring performance, monitoring, traffic analysis in cloud services, virtual machines and service fabric -- Previously updated : 02/14/2020 Last updated : 04/06/2022
azure-monitor Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/gateway.md
Title: Connect computers by using the Log Analytics gateway | Microsoft Docs description: Connect your devices and Operations Manager-monitored computers by using the Log Analytics gateway to send data to the Azure Automation and Log Analytics service when they do not have internet access. -- Previously updated : 12/24/2019 Last updated : 04/06/2022
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
There are many ways to explore Application Insights telemetry. For more informat
## Next steps -- [Manage usage and costs for Application Insights](pricing.md#manage-usage-and-costs-for-application-insights) - [Instrument your web pages](./javascript.md) for page view, AJAX, and other client-side telemetry. - [Analyze mobile app usage](../app/mobile-center-quickstart.md) by integrating with Visual Studio App Center. - [Monitor availability with URL ping tests](./monitor-web-app-availability.md) to your website from Application Insights servers.
azure-monitor Asp Net Troubleshoot No Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-troubleshoot-no-data.md
Last updated 05/21/2020
*In Application Insights, I only see a fraction of the events that are being generated by my app.* * If you're consistently seeing the same fraction, it's probably because of adaptive [sampling](../../azure-monitor/app/sampling.md). To confirm this, open Search (from the **Overview** in the portal on the left) and look at an instance of a Request or other event. To see the full property details, select the ellipsis (**...**) at the bottom of the **Properties** section. If Request Count > 1, sampling is in operation.
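If you want a quick way to gauge how much sampling is reducing stored telemetry, a query along the following lines can help. This is a minimal sketch that assumes the classic `requests` table in Application Insights Logs, where each stored record's `itemCount` reflects how many original events it represents:

```kusto
// Estimate how much telemetry was retained versus generated over the last 24 hours.
// itemCount on each stored record is the number of original events it stands for.
requests
| where timestamp > ago(24h)
| summarize storedRecords = count(), originalEvents = sum(itemCount)
| extend retainedPercent = 100.0 * storedRecords / originalEvents
```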
-* It's possible that you're hitting a [data rate limit](../../azure-monitor/app/pricing.md#limits-summary) for your pricing plan. These limits are applied per minute.
+* It's possible that you're hitting a [data rate limit](../service-limits.md#application-insights) for your pricing plan. These limits are applied per minute.
*I'm randomly experiencing data loss.*
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md
This article will cover how to create an Azure Function with TrackAvailability()
> [!NOTE] > This example is designed solely to show you the mechanics of how the TrackAvailability() API call works within an Azure Function, not how to write the underlying HTTP test code or business logic required to turn this into a fully functional availability test. By default, if you walk through this example, you'll create a basic HTTP GET availability test.
+> To follow these instructions, you must use the [dedicated plan](https://docs.microsoft.com/azure/azure-functions/dedicated-plan) to allow editing code in App Service Editor.
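Once the function is running, you can confirm that its results are arriving with a short log query. This is a sketch that assumes the classic `availabilityResults` table; the `todouble(success)` conversion is only a convenient way to compute a success percentage:

```kusto
// Success percentage per availability test name over the last day.
// Results sent with TrackAvailability() land in the availabilityResults table.
availabilityResults
| where timestamp > ago(1d)
| summarize successPercent = 100.0 * avg(todouble(success)) by name
| order by successPercent asc
```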
## Create a timer trigger function
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Workspace-based Application Insights allows you to take advantage of all the lat
- Encryption-at-rest policy - Lifetime management policy - Network access for all data associated with Application Insights Profiler and Snapshot Debugger
-* [Commitment Tiers](../logs/manage-cost-storage.md#pricing-model) enable you to save as much as 30% compared to the Pay-As-You-Go price. Otherwise, Pay-as-you-go data ingestion and data retention are billed similarly in Log Analytics as they are in Application Insights.
+* [Commitment Tiers](../logs/cost-logs.md#commitment-tiers) enable you to save as much as 30% compared to the Pay-As-You-Go price. Otherwise, Pay-as-you-go data ingestion and data retention are billed similarly in Log Analytics as they are in Application Insights.
* Faster data ingestion via Log Analytics streaming ingestion. ## Migration process When you migrate to a workspace-based resource, no data is transferred from your classic resource's storage to the new workspace-based storage. Choosing to migrate will change the location where new data is written to a Log Analytics workspace while preserving access to your classic resource data.
-Your classic resource data will persist and be subject to the retention settings on your classic Application Insights resource. All new data ingested post migration will be subject to the [retention settings](../logs/manage-cost-storage.md#change-the-data-retention-period) of the associated Log Analytics workspace, which also supports [different retention settings by data type](../logs/manage-cost-storage.md#retention-by-data-type).
+Your classic resource data will persist and be subject to the retention settings on your classic Application Insights resource. All new data ingested post migration will be subject to the [retention settings](../logs/data-retention-archive.md) of the associated Log Analytics workspace, which also supports [different retention settings by data type](../logs/data-retention-archive.md#set-retention-and-archive-policy-by-table).
The migration process is **permanent, and cannot be reversed**. Once you migrate a resource to workspace-based Application Insights, it will always be a workspace-based resource. However, once you migrate you're able to change the target workspace as often as needed. <!-- This note duplicates information in pricing.md. Understanding workspace-based usage and costs has been added as a migration prerequisite.
Once the migration is complete, you can use [diagnostic settings](../essentials/
> - If you currently store Application Insights data for longer than the default 90 days and want to retain this larger retention period, you may need to adjust your workspace retention settings. > - If you've selected data retention greater than 90 days on data ingested into the Classic Application Insights resource prior to migration, data retention will continue to be billed through that Application Insights resource until that data exceeds the retention period. -- Understand [Workspace-based Application Insights](pricing.md#workspace-based-application-insights) usage and costs.
+- Understand [Workspace-based Application Insights](../logs/cost-logs.md#application-insights-billing) usage and costs.
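Once a resource is workspace-based, you can see what it contributes to workspace ingestion directly in the Log Analytics workspace. This is a sketch that assumes the standard Application Insights workspace tables (`AppRequests`, `AppTraces`, and so on) and the built-in `_BilledSize` and `_IsBillable` columns:

```kusto
// Billable volume (GB) ingested per Application Insights table over the last 30 days.
union withsource = SourceTable AppRequests, AppDependencies, AppPageViews, AppTraces, AppExceptions, AppMetrics
| where TimeGenerated > ago(30d)
| where _IsBillable == true
| summarize BilledGB = sum(_BilledSize) / pow(1024, 3) by SourceTable
| order by BilledGB desc
```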
## Migrate your resource
Once your resource is migrated, you'll see the corresponding workspace info in t
Clicking the blue link text will take you to the associated Log Analytics workspace where you can take advantage of the new unified workspace query environment. > [!NOTE]
-> After migrating to a workspace-based Application Insights resource we recommend using the [workspace's daily cap](../logs/manage-cost-storage.md#manage-your-maximum-daily-data-volume) to limit ingestion and costs instead of the cap in Application Insights.
+> After migrating to a workspace-based Application Insights resource we recommend using the [workspace's daily cap](../logs/daily-cap.md) to limit ingestion and costs instead of the cap in Application Insights.
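To keep an eye on how close the workspace is to its daily cap, the standard `Usage` table is a convenient starting point. A minimal sketch (the `Usage` table reports quantities in MB):

```kusto
// Billable ingestion (GB) per day and per data type for the last 31 days.
Usage
| where TimeGenerated > ago(31d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1000.0 by bin(TimeGenerated, 1d), DataType
| order by TimeGenerated desc
```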
## Understanding log queries
azure-monitor Create Workspace Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-workspace-resource.md
Workspace-based resources support full integration between Application Insights
This also allows for common Azure role-based access control (Azure RBAC) across your resources, and eliminates the need for cross-app/workspace queries. > [!NOTE]
-> Data ingestion and retention for workspace-based Application Insights resources are billed through the Log Analytics workspace where the data is located. [Learn more]( ./pricing.md#workspace-based-application-insights) about billing for workspace-based Application Insights resources.
+> Data ingestion and retention for workspace-based Application Insights resources are billed through the Log Analytics workspace where the data is located. [Learn more](../logs/cost-logs.md) about billing for workspace-based Application Insights resources.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
Workspace-based Application Insights allows you to take advantage of the latest
* [Customer-Managed Keys (CMK)](../logs/customer-managed-keys.md) provides encryption at rest for your data with encryption keys to which only you have access. * [Azure Private Link](../logs/private-link-security.md) allows you to securely link Azure PaaS services to your virtual network using private endpoints. * [Bring Your Own Storage (BYOS) for Profiler and Snapshot Debugger](./profiler-bring-your-own-storage.md) gives you full control over the encryption-at-rest policy, the lifetime management policy, and network access for all data associated with Application Insights Profiler and Snapshot Debugger.
-* [Commitment Tiers](../logs/manage-cost-storage.md#pricing-model) enable you to save as much as 30% compared to the Pay-As-You-Go price.
+* [Commitment Tiers](../logs/cost-logs.md#commitment-tiers) enable you to save as much as 30% compared to the Pay-As-You-Go price.
* Faster data ingestion via Log Analytics streaming ingestion. ## Create workspace-based resource
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md
For web pages, open your browser's debugging window.
This would be possible by writing a [telemetry processor plugin](./api-filtering-sampling.md). ## How long is the data kept?
-Raw data points (that is, items that you can query in Analytics and inspect in Search) are kept for up to 730 days. You can [select a retention duration](./pricing.md#change-the-data-retention-period) of 30, 60, 90, 120, 180, 270, 365, 550 or 730 days. If you need to keep data longer than 730 days, you can use [Continuous Export](./export-telemetry.md) to copy it to a storage account during data ingestion.
+Raw data points (that is, items that you can query in Analytics and inspect in Search) are kept for up to 730 days. You can [select a retention duration](../logs/data-retention-archive.md#set-retention-and-archive-policy-by-table) of 30, 60, 90, 120, 180, 270, 365, 550 or 730 days. If you need to keep data longer than 730 days, you can use [Continuous Export](./export-telemetry.md) to copy it to a storage account during data ingestion.
Data kept longer than 90 days will incur additional charges. Learn more about Application Insights pricing on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
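A simple way to check how far back your retained telemetry actually reaches is to look at the oldest record still queryable. A sketch against the classic `requests` table:

```kusto
// Oldest and newest request telemetry still available under the current retention setting.
requests
| summarize oldestRecord = min(timestamp), newestRecord = max(timestamp)
```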
azure-monitor Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/devops.md
When an alert is raised, Application Insights can automatically create a work it
## But what about...? * [Privacy and storage](./data-retention-privacy.md) - Your telemetry is kept on Azure secure servers. * Performance - the impact is very low. Telemetry is batched.
-* [Pricing](./pricing.md) - You can get started for free, and that continues while you're in low volume.
+* [Pricing](../logs/cost-logs.md#application-insights-billing) - You can get started for free, and that continues while you're in low volume.
## Next steps
azure-monitor Diagnostic Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/diagnostic-search.md
In addition to the out-of-the-box telemetry sent by Application Insights SDK, yo
### <a name="limits"></a>How much data is retained?
-See the [Limits summary](./pricing.md#limits-summary).
+See the [Limits summary](../service-limits.md#application-insights).
### How can I see POST data in my server requests?
azure-monitor Legacy Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/legacy-pricing.md
+
+ Title: Application Insights legacy enterprise (per node) pricing tier
+description: Describes the legacy pricing tier for Application Insights.
+ Last updated : 02/18/2022+
+
+# Application Insights legacy enterprise (per node) pricing tier
+For early adopters of Azure Application Insights, there are still two possible pricing tiers: Basic and Enterprise. The Basic pricing tier is the default tier. It includes all Enterprise tier features at no extra cost and bills primarily on the volume of data that's ingested.
+
+These legacy pricing tiers have been renamed. The Enterprise pricing tier is now called **Per Node** and the Basic pricing tier is now called **Per GB**. These new names are used below and in the Azure portal.
+
+The Per Node (formerly Enterprise) tier has a per-node charge, and each node receives a daily data allowance. In the Per Node pricing tier, you're charged for data ingested above the included allowance. If you're using Operations Management Suite, you should choose the Per Node tier. In April 2018, we [introduced](https://azure.microsoft.com/blog/introducing-a-new-way-to-purchase-azure-monitoring-services/) a new pricing model for Azure monitoring. This model adopts a simple "pay-as-you-go" model across the complete portfolio of monitoring services. Learn more about the [new pricing model](../usage-estimated-costs.md).
+
+For current prices in your currency and region, see [Application Insights pricing](https://azure.microsoft.com/pricing/details/application-insights/).
+
+## Understanding billed usage on the legacy Enterprise (Per Node) tier
+
+As described below in more detail, the legacy Enterprise (Per Node) tier combines usage from across all Application Insights resources in a subscription to calculate the number of nodes and the data overage. Due to this combination process, **usage for all Application Insights resources in a subscription is reported against just one of the resources**. This makes it complicated to reconcile your billed usage with the usage you observe for each Application Insights resource.
+
+> [!WARNING]
+> Because of the complexity of tracking and understanding usage of Application Insights resources in the legacy Enterprise (Per Node) tier, we strongly recommend using the current Pay-As-You-Go pricing tier.
+
+## Per Node tier and Operations Management Suite subscription entitlements
+
+Customers who purchase Operations Management Suite E1 and E2 can get Application Insights Per Node as a supplemental component at no extra cost as [previously announced](/archive/blogs/msoms/azure-application-insights-enterprise-as-part-of-operations-management-suite-subscription). Specifically, each unit of Operations Management Suite E1 and E2 includes an entitlement to one node of the Application Insights Per Node tier. Each Application Insights node includes up to 200 MB of data ingested per day (separate from Log Analytics data ingestion), with 90-day data retention at no extra cost. The tier is described in more detail later in the article.
+
+Because this tier is applicable only to customers with an Operations Management Suite subscription, customers who don't have an Operations Management Suite subscription don't see an option to select this tier.
+
+> [!NOTE]
+> To ensure that you get this entitlement, your Application Insights resources must be in the Per Node pricing tier. This entitlement applies only as nodes. Application Insights resources in the Per GB tier don't realize any benefit.
+> This entitlement isn't visible in the estimated costs shown in the **Usage and estimated cost** pane. Also, if you move a subscription to the new Azure monitoring pricing model introduced in April 2018, the Per GB tier is the only tier available. Moving a subscription to the new Azure monitoring pricing model isn't advisable if you have an Operations Management Suite subscription.
+
+## How the Per Node tier works
+
+* You pay for each node that sends telemetry for any apps in the Per Node tier.
+ * A *node* is a physical or virtual server machine or a platform-as-a-service role instance that hosts your app.
+ * Development machines, client browsers, and mobile devices don't count as nodes.
+ * If your app has several components that send telemetry, such as a web service and a back-end worker, the components are counted separately.
+ * [Live Metrics Stream](../app/live-stream.md) data isn't counted for pricing purposes.
+* In a subscription, your charges are per node, not per app. If you have five nodes that send telemetry for 12 apps, the charge is for five nodes.
+* Although charges are quoted per month, you're charged only for any hour in which a node sends telemetry from an app. The hourly charge is the quoted monthly charge divided by 744 (the number of hours in a 31-day month).
+* A data volume allocation of 200 MB per day is given for each node that's detected (with hourly granularity). Unused data allocation isn't carried over from one day to the next.
+ * If you choose the Per Node pricing tier, each subscription gets a daily allowance of data based on the number of nodes that send telemetry to the Application Insights resources in that subscription. So, if you have five nodes that send data all day, you'll have a pooled allowance of 1 GB applied to all Application Insights resources in that subscription. It doesn't matter if certain nodes send more data than other nodes because the included data is shared across all nodes. If on a given day, the Application Insights resources receive more data than is included in the daily data allocation for this subscription, the per-GB overage data charges apply.
+ * The daily data allowance is calculated as the number of hours in the day (using UTC) that each node sends telemetry divided by 24 multiplied by 200 MB. So, if you have four nodes that send telemetry during 15 of the 24 hours in the day, the included data for that day would be ((4 &#215; 15) / 24) &#215; 200 MB = 500 MB. At the price of 2.30 USD per GB for data overage, the charge would be 1.15 USD if the nodes send 1 GB of data that day.
+ * The Per Node tier daily allowance isn't shared with applications for which you have chosen the Per GB tier. Unused allowance isn't carried over from day-to-day.
+
+## Examples of how to determine distinct node count
+
+| Scenario | Total daily node count |
+|:|:-:|
+| 1 application using 3 Azure App Service instances and 1 virtual server | 4 |
+| 3 applications running on 2 VMs; the Application Insights resources for these applications are in the same subscription and in the Per Node tier | 2 |
+| 4 applications whose Application Insights resources are in the same subscription; each application running 2 instances during 16 off-peak hours, and 4 instances during 8 peak hours | 13.33 |
+| Cloud services with 1 Worker Role and 1 Web Role, each running 2 instances | 4 |
+| A 5-node Azure Service Fabric cluster running 50 microservices; each microservice running 3 instances | 5|
+
+* The precise node counting depends on which Application Insights SDK your application is using.
+ * In SDK versions 2.2 and later, both the Application Insights [Core SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights/) and the [Web SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Web/) report each application host as a node. Examples are the computer name for physical server and VM hosts or the instance name for cloud services. The only exception is an application that uses only the [.NET Core](https://dotnet.github.io/) and the Application Insights Core SDK. In that case, only one node is reported for all hosts because the host name isn't available.
+ * For earlier versions of the SDK, the [Web SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Web/) behaves like the newer SDK versions, but the [Core SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights/) reports only one node, regardless of the number of application hosts.
+ * If your application uses the SDK to set **roleInstance** to a custom value, by default, that same value is used to determine node count.
+ * If you're using a new SDK version with an app that runs from client machines or mobile devices, the node count might return a number that's large (because of the large number of client machines or mobile devices).
+++++
+## Next steps
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
While the above sample is for a console app, the same code can be used in any .N
|**Latency**|Data displayed within one second|Aggregated over minutes| |**No retention**|Data persists while it's on the chart, and is then discarded|[Data retained for 90 days](./data-retention-privacy.md#how-long-is-the-data-kept)| |**On demand**|Data is only streamed while the Live Metrics pane is open |Data is sent whenever the SDK is installed and enabled|
-|**Free**|There is no charge for Live Stream data|Subject to [pricing](./pricing.md)
+|**Free**|There is no charge for Live Stream data|Subject to [pricing](../logs/cost-logs.md#application-insights-billing)
|**Sampling**|All selected metrics and counters are transmitted. Failures and stack traces are sampled. |Events may be [sampled](./api-filtering-sampling.md)| |**Control channel**|Filter control signals are sent to the SDK. We recommend you secure this channel.|Communication is one way, to the portal|
azure-monitor Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-This article shows you how to automate the creation and update of [Application Insights](./app-insights-overview.md) resources automatically by using Azure Resource Management. You might, for example, do so as part of a build process. Along with the basic Application Insights resource, you can create [availability web tests](./monitor-web-app-availability.md), set up [alerts](../alerts/alerts-log.md), set the [pricing scheme](pricing.md), and create other Azure resources.
+This article shows you how to automate the creation and update of [Application Insights](./app-insights-overview.md) resources by using Azure Resource Manager. You might, for example, do so as part of a build process. Along with the basic Application Insights resource, you can create [availability web tests](./monitor-web-app-availability.md), set up [alerts](../alerts/alerts-log.md), set the [pricing scheme](../logs/cost-logs.md#application-insights-billing), and create other Azure resources.
The key to creating these resources is JSON templates for [Azure Resource Manager](../../azure-resource-manager/management/manage-resources-powershell.md). The basic procedure is: download the JSON definitions of existing resources; parameterize certain values such as names; and then run the template whenever you want to create a new resource. You can package several resources together, to create them all in one go - for example, an app monitor with availability tests, alerts, and storage for continuous export. There are some subtleties to some of the parameterizations, which we'll explain here.
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
There are several [ways of sending custom metrics from the Application Insights
## Custom metrics dimensions and pre-aggregation
-All metrics that you send using [trackMetric](./api-custom-events-metrics.md#trackmetric) or [GetMetric and TrackValue](./api-custom-events-metrics.md#getmetric) API calls are automatically stored in both logs and metrics stores. However, while the log-based version of your custom metric always retains all dimensions, the pre-aggregated version of the metric is stored by default with no dimensions. You can turn on collection of dimensions of custom metrics on the [usage and estimated cost](./pricing.md) tab by checking "Enable alerting on custom metric dimensions":
+All metrics that you send using [trackMetric](./api-custom-events-metrics.md#trackmetric) or [GetMetric and TrackValue](./api-custom-events-metrics.md#getmetric) API calls are automatically stored in both logs and metrics stores. However, while the log-based version of your custom metric always retains all dimensions, the pre-aggregated version of the metric is stored by default with no dimensions. You can turn on collection of dimensions of custom metrics on the [usage and estimated cost](../usage-estimated-costs.md#usage-and-estimated-costs) tab by checking "Enable alerting on custom metric dimensions":
![Usage and estimated cost](./media/pre-aggregated-metrics-log-metrics/001-cost.png)
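To see which dimensions a custom metric actually carries in the log store, you can inspect the `customMetrics` table. This is a sketch for illustration only; `queueLength` is a hypothetical metric name, so substitute one of your own:

```kusto
// Inspect the dimensions stored with a custom metric in the log store.
// "queueLength" is a hypothetical metric name; substitute one of your own.
customMetrics
| where timestamp > ago(1h)
| where name == "queueLength"
| project timestamp, name, value, customDimensions
| take 20
```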
Use [Azure Monitor Metrics Explorer](../essentials/metrics-getting-started.md) t
## Pricing models for Application Insights metrics
-Ingesting metrics into Application Insights, whether log-based or pre-aggregated, will generate costs based on the size of the ingested data, as described [here](./pricing.md#pricing-model). Your custom metrics, including all its dimensions, are always stored in the Application Insights log-store; additionally, a pre-aggregated version of your custom metrics (with no dimensions) is forwarded to the metrics store by default.
+Ingesting metrics into Application Insights, whether log-based or pre-aggregated, will generate costs based on the size of the ingested data, as described in [Azure Monitor Logs pricing details](../logs/cost-logs.md#application-insights-billing). Your custom metrics, including all their dimensions, are always stored in the Application Insights log store; additionally, a pre-aggregated version of your custom metrics (with no dimensions) is forwarded to the metrics store by default.
Selecting the [Enable alerting on custom metric dimensions](#custom-metrics-dimensions-and-pre-aggregation) option to store all dimensions of the pre-aggregated metrics in the metric store can generate **additional** costs based on [Custom Metrics pricing](https://azure.microsoft.com/pricing/details/monitor/).
azure-monitor Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pricing.md
- Title: Manage usage and costs for Azure Application Insights | Microsoft Docs
-description: Manage telemetry volumes and monitor costs in Application Insights.
-- Previously updated : 02/17/2021---
-# Manage usage and costs for Application Insights
-
-This article describes how to proactively monitor and control Application Insights costs.
-
-[Monitoring usage and estimated costs](..//usage-estimated-costs.md) describes usage and estimated costs across Azure Monitor features using [Azure Cost Management + Billing](../logs/manage-cost-storage.md#viewing-log-analytics-usage-on-your-azure-bill).
-
-> [!NOTE]
-> All prices and costs in this article are for example purposes only.
-
-<! App Insights monitoring features (availability, performance, usage, etc. ) Supported languages, integration with specific tools (Azure DevOps, Jira, and PagerDuty, etc.) should be documented elsewhere. (e.g. platforms.md) -->
-
-If you have questions about how pricing works for Application Insights, you can post a question in our [Microsoft Q&A question page](/answers/topics/azure-monitor.html).
-
-## Pricing model
-
-The pricing for [Azure Application Insights][start] is a **Pay-As-You-Go** model based on data volume ingested and optionally for longer data retention. Each Application Insights resource is charged as a separate service and contributes to the bill for your Azure subscription. Data volume is measured as the size of the uncompressed JSON data package that's received by Application Insights from your application. Data volume is measured in GB (10^9 bytes). There's no data volume charge for using the [Live Metrics Stream](./live-stream.md). On your Azure bill or in [Azure Cost Management + Billing](../logs/manage-cost-storage.md#viewing-log-analytics-usage-on-your-azure-bill), your data ingestion and data retention for a classic Application Insights resource will be reported with a meter category of **Log Analytics**.
-
-[Multi-step web tests](./availability-multistep.md) incur extra charges. Multi-step web tests are web tests that perform a sequence of actions. There's no separate charge for *ping tests* of a single page. Telemetry from ping tests and multi-step tests is charged the same as other telemetry from your app.
-
-The Application Insights option to [Enable alerting on custom metric dimensions](./pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) can also increase costs because this can result in the creation of more pre-aggregation metrics. [Learn more](./pre-aggregated-metrics-log-metrics.md) about log-based and pre-aggregated metrics in Application Insights and about [pricing](https://azure.microsoft.com/pricing/details/monitor/) for Azure Monitor custom metrics.
-
-### Workspace-based Application Insights
-
-For Application Insights resources which send their data to a Log Analytics workspace, called [workspace-based Application Insights resources](create-workspace-resource.md), the billing for data ingestion and retention is done by the workspace where the Application Insights data is located. This enables you to leverage all options of the Log Analytics [pricing model](../logs/manage-cost-storage.md#pricing-model), including **Commitment Tiers** in addition to Pay-As-You-Go. Commitment Tiers offer pricing up to 30% lower than Pay-As-You-Go. Log Analytics also has more options for data retention, including [retention by data type](../logs/manage-cost-storage.md#retention-by-data-type). Application Insights data types in the workspace receive 90 days of retention without charges. Usage of web tests and enabling alerting on custom metric dimensions is still reported through Application Insights. Learn how to track data ingestion and retention costs in Log Analytics using the [Usage and estimated costs](../logs/manage-cost-storage.md#understand-your-usage-and-estimate-costs), [Azure Cost Management + Billing](../logs/manage-cost-storage.md#viewing-log-analytics-usage-on-your-azure-bill) and [Log Analytics queries](#data-volume-for-workspace-based-application-insights-resources).
-
-## Estimating the costs to manage your application
-
-If you're not yet using Application Insights, you can use the [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to estimate the cost of using Application Insights. Start by entering "Azure Monitor" in the Search box, and clicking on the resulting Azure Monitor tile. Scroll down the page to Azure Monitor, and expand the Application Insights section. Your estimated costs depend on the amount of log data ingested. There are two approaches to estimate data volumes:
-
-1. estimate your likely data ingestion based on what other similar applications generate, or
-2. use of default monitoring and adaptive sampling, which is available in the ASP.NET SDK.
-
-### Learn from what similar applications collect
-
-In the Azure Monitoring Pricing calculator for Application Insights, click to enable the **Estimate data volume based on application activity**. Here you can provide inputs about your application (requests per month and page views per month, in case you'll collect client-side telemetry), and then the calculator will tell you the median and 90th percentile amount of data collected by similar applications. These applications span the range of Application Insights configuration (e.g some have default [sampling](./sampling.md), some have no sampling etc.), so you still have the control to reduce the volume of data you ingest far below the median level using sampling.
-
-### Data collection when using sampling
-
-With the ASP.NET SDK's [adaptive sampling](sampling.md#adaptive-sampling), the data volume is adjusted automatically to keep within a specified maximum rate of traffic for default Application Insights monitoring. If the application produces a low amount of telemetry, such as when debugging or due to low usage, items won't be dropped by the sampling processor as long as volume is below the configured events per second level. For a high volume application, with the default threshold of five events per second, adaptive sampling will limit the number of daily events to 432,000. Considering a typical average event size of 1 KB, this corresponds to 13.4 GB of telemetry per 31-day month per node hosting your application since the sampling is done local to each node.
-
-For SDKs that don't support adaptive sampling, you can employ [ingestion sampling](./sampling.md#ingestion-sampling), which samples when the data is received by Application Insights based on a percentage of data to retain, or [fixed-rate sampling for ASP.NET, ASP.NET Core, and Java websites](sampling.md#fixed-rate-sampling) to reduce the traffic sent from your web server and web browsers
-
-## Viewing Application Insights usage on your Azure bill
-
-The easiest way to see the billed usage for a single Application Insights resource, which isn't a workspace-based resource is to go to the resource's Overview page and click **View Cost** in the upper right corner. You might need elevated access to Cost Management data ([learn more](../../cost-management-billing/costs/assign-access-acm-data.md)).
-
-To learn more, Azure provides a great deal of useful functionality in the [Azure Cost Management + Billing](../../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=/azure/billing/TOC.json) hub. For instance, the "Cost analysis" functionality enables you to view your spends for Azure resources. Adding a filter by resource type (to microsoft.insights/components for Application Insights) will allow you to track your spending. Then for "Group by" select "Meter category" or "Meter". Application Insights billed usage for data ingestion and data retention will show up as **Log Analytics** for the Meter category since Log Analytics backend for all Azure Monitor logs.
-
-> [!NOTE]
-> Application Insights billing for data ingestion and data retention is reported as coming from the **Log Analytics** service (Meter category in Azure Cost Management + Billing).
-
-Even more understanding of your usage can be gained by [downloading your usage from the Azure portal](../../cost-management-billing/understand/download-azure-daily-usage.md).
-In the downloaded spreadsheet, you can see usage per Azure resource per day. In this Excel spreadsheet, usage from your Application Insights resources can be found by first filtering on the "Meter Category" column to show "Application Insights" and "Log Analytics", and then adding a filter on the "Instance ID" column, which is "contains microsoft.insights/components". Most Application Insights usage is reported on meters with the Meter Category of Log Analytics, since there's a single logs backend for all Azure Monitor components. Only Application Insights resources on legacy pricing tiers and multi-step web tests are reported with a Meter Category of Application Insights. The usage is shown in the "Consumed Quantity" column and the unit for each entry is shown in the "Unit of Measure" column. More details are available to help you [understand your Microsoft Azure bill](../../cost-management-billing/understand/review-individual-bill.md).
-
-## Understand your usage and optimize your costs
-<a name="understand-your-usage-and-estimate-costs"></a>
-
-Application Insights makes it easy to understand what your costs are likely to be based on recent usage patterns. To get started, in the Azure portal, for the Application Insights resource, go to the **Usage and estimated costs** page:
-
-![Choose pricing](./media/pricing/pricing-001.png)
-
-A. Review your data volume for the month. This includes all the data that's received and retained (after any [sampling](./sampling.md)) from your server and client apps, and from availability tests.
-B. A separate charge is made for [Multi-step web tests](./availability-multistep.md). (This doesn't include simple availability tests, which are included in the data volume charge.)
-C. View data volume trends for the past month.
-D. Enable data ingestion [sampling](./sampling.md).
-E. Set the daily data volume cap.
-
-(All prices displayed in screenshots in this article are for example purposes only. For current prices in your currency and region, see [Application Insights pricing][pricing].)
-
-To investigate your Application Insights usage more deeply, open the **Metrics** page, add the metric named "Data point volume", and then select the *Apply splitting* option to split the data by "Telemetry item type".
-
-Application Insights charges are added to your Azure bill. You can see details of your Azure bill in the **Cost Management + Billing** section of the Azure portal, or in the [Azure billing portal](https://account.windowsazure.com/Subscriptions). [See below](#viewing-application-insights-usage-on-your-azure-bill) for details on using this for Application Insights.
-
-![In the left menu, select Billing](./media/pricing/02-billing.png)
-
-### Using data volume metrics
-<a id="understanding-ingested-data-volume"></a>
-
-To learn more about your data volumes, select **Metrics** for your Application Insights resource and add a new chart. For the chart metric, under **Log-based metrics**, select **Data point volume**. Select **Apply splitting**, and group by **Telemetry item type**.
-
-![Use Metrics to look at data volume](./media/pricing/10-billing.png)
-
-### Queries to understand data volume details
-
-There are two approaches to investigating data volumes for Application Insights. The first uses aggregated information in the `systemEvents` table, and the second uses the `_BilledSize` property, which is available on each ingested event. The `systemEvents` table doesn't have data size information for [workspace-based Application Insights resources](#data-volume-for-workspace-based-application-insights-resources).
-
-#### Using aggregated data volume information
-
-For instance, you can use the `systemEvents` table to see the data volume ingested in the last 24 hours with the query:
-
-```kusto
-systemEvents
-| where timestamp >= ago(24h)
-| where type == "Billing"
-| extend BillingTelemetryType = tostring(dimensions["BillingTelemetryType"])
-| extend BillingTelemetrySizeInBytes = todouble(measurements["BillingTelemetrySize"])
-| summarize sum(BillingTelemetrySizeInBytes)
-```
-
-Or to see a chart of data volume (in bytes) by data type for the last 30 days, you can use:
-
-```kusto
-systemEvents
-| where timestamp >= startofday(ago(30d))
-| where type == "Billing"
-| extend BillingTelemetryType = tostring(dimensions["BillingTelemetryType"])
-| extend BillingTelemetrySizeInBytes = todouble(measurements["BillingTelemetrySize"])
-| summarize sum(BillingTelemetrySizeInBytes) by BillingTelemetryType, bin(timestamp, 1d) | render barchart
-```
-
-This query can be used in an [Azure Log Alert](../alerts/alerts-unified-log.md) to set up alerting on data volumes.
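-
-For example, here's a minimal sketch of a variant suited to an alert rule: it returns a single value, the gigabytes ingested in the last 24 hours, so the alert threshold can be expressed in GB:
-
-```kusto
-systemEvents
-| where timestamp >= ago(24h)
-| where type == "Billing"
-| extend BillingTelemetrySizeInBytes = todouble(measurements["BillingTelemetrySize"])
-// Convert bytes to GB so the alert threshold can be expressed in GB
-| summarize IngestedGB = sum(BillingTelemetrySizeInBytes) / exp2(30)
-```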
-
-To understand how your telemetry volume changes over time, you can get the count of events by type using the query:
-
-```kusto
-systemEvents
-| where timestamp >= startofday(ago(30d))
-| where type == "Billing"
-| extend BillingTelemetryType = tostring(dimensions["BillingTelemetryType"])
-| summarize count() by BillingTelemetryType, bin(timestamp, 1d)
-| render barchart
-```
-
-#### Using data size per event information
-
-To learn more details about the source of your data volumes, you can use the `_BilledSize` property that is present on each ingested event.
-
-For example, to look at which operations generate the most data volume in the last 30 days, we can sum `_BilledSize` for all dependency events:
-
-```kusto
-dependencies
-| where timestamp >= startofday(ago(30d))
-| summarize sum(_BilledSize) by operation_Name
-| render barchart
-```
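-
-Similarly, since `_BilledSize` is available on every ingested item, you can slice by other dimensions. The following sketch (assuming the classic `requests` table and its standard `cloud_RoleInstance` column) shows which role instances contribute the most request volume:
-
-```kusto
-requests
-| where timestamp >= startofday(ago(30d))
-// _BilledSize is the billed size in bytes of each ingested item
-| summarize sum(_BilledSize) by cloud_RoleInstance
-| render barchart
-```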
-
-#### Data volume for workspace-based Application Insights resources
-
-To look at the data volume trends for all of the [workspace-based Application Insights resources](create-workspace-resource.md) in a workspace for the last week, go to the Log Analytics workspace and run the query:
-
-```kusto
-union (AppAvailabilityResults),
- (AppBrowserTimings),
- (AppDependencies),
- (AppExceptions),
- (AppEvents),
- (AppMetrics),
- (AppPageViews),
- (AppPerformanceCounters),
- (AppRequests),
- (AppSystemEvents),
- (AppTraces)
-| where TimeGenerated >= startofday(ago(7d)) and TimeGenerated < startofday(now())
-| summarize sum(_BilledSize) by _ResourceId, bin(TimeGenerated, 1d)
-| render areachart
-```
-
-To query the data volume trends by type for a specific workspace-based Application Insights resource, in the Log Analytics workspace use:
-
-```kusto
-union (AppAvailabilityResults),
- (AppBrowserTimings),
- (AppDependencies),
- (AppExceptions),
- (AppEvents),
- (AppMetrics),
- (AppPageViews),
- (AppPerformanceCounters),
- (AppRequests),
- (AppSystemEvents),
- (AppTraces)
-| where TimeGenerated >= startofday(ago(7d)) and TimeGenerated < startofday(now())
-| where _ResourceId contains "<myAppInsightsResourceName>"
-| summarize sum(_BilledSize) by Type, bin(TimeGenerated, 1d)
-| render areachart
-```
-
-## Managing your data volume
-
-The volume of data you send can be managed using the following techniques:
-
-* **Sampling**: You can use sampling to reduce the amount of telemetry that's sent from your server and client apps, with minimal distortion of metrics. Sampling is the primary tool you can use to tune the amount of data you send. Learn more about [sampling features](./sampling.md).
-
-* **Limit Ajax calls**: You can [limit the number of Ajax calls that can be reported](./javascript.md#configuration) in every page view, or switch off Ajax reporting. Disabling Ajax calls will disable [JavaScript correlation](./javascript.md#enable-correlation).
-
-* **Disable unneeded modules**: [Edit ApplicationInsights.config](./configuration-with-applicationinsights-config.md) to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data are inessential.
-
-* **Pre-aggregate metrics**: If you put calls to TrackMetric in your app, you can reduce traffic by using the overload that accepts your calculation of the average and standard deviation of a batch of measurements. Or, you can use a [pre-aggregating package](https://www.myget.org/gallery/applicationinsights-sdk-labs).
-
-* **Daily cap**: When you create an Application Insights resource in the Azure portal, the daily cap is set to 100 GB/day. When you create an Application Insights resource in Visual Studio, the default is small (only 32.3 MB/day). The daily cap default is set to facilitate testing. It's intended that the user will raise the daily cap before deploying the app into production.
-
- The maximum cap in Application Insights is 1,000 GB/day unless you request a higher maximum for a high-traffic application.
-
- > [!TIP]
- > If you have a workspace-based Application Insights resource, we recommend using the [workspace's daily cap](../logs/manage-cost-storage.md#manage-your-maximum-daily-data-volume) to limit ingestion and costs instead of the cap in Application Insights.
-
- Warning emails about the daily cap are sent to accounts that are members of these roles for your Application Insights resource: "ServiceAdmin", "AccountAdmin", "CoAdmin", and "Owner".
-
- Use care when you set the daily cap. Your intent should be to *never hit the daily cap*. If you hit the daily cap, you lose data for the remainder of the day, and you can't monitor your application. To change the daily cap, use the **Daily volume cap** option. You can access this option in the **Usage and estimated costs** pane (this is described in more detail later in the article).
-
- We've removed the restriction on some subscription types that have credit that can't be used for Application Insights. Previously, if the subscription had a spending limit, the daily cap dialog included instructions to remove the spending limit before the daily cap could be raised beyond 32.3 MB/day.
-
-* **Throttling**: Throttling limits the data rate to 32,000 events per second, averaged over 1 minute per instrumentation key. The volume of data that your app sends is assessed every minute. If it exceeds the per-second rate averaged over the minute, the server refuses some requests. The SDK buffers the data and then tries to resend it. It spreads out a surge over several minutes. If your app consistently sends data at more than the throttling rate, some data will be dropped. (The ASP.NET, Java, and JavaScript SDKs try to resend data this way; other SDKs might drop throttled data.) If throttling occurs, a notification warns you that it has occurred.
--
-## Manage your maximum daily data volume
-
-You can use the daily volume cap to limit the data collected. However, if the cap is met, a loss of all telemetry sent from your application for the remainder of the day occurs. It *isn't advisable* to have your application hit the daily cap. You can't track the health and performance of your application after it reaches the daily cap.
-
-> [!WARNING]
-> If you have a workspace-based Application Insights resource, we recommend using the [workspace's daily cap](../logs/manage-cost-storage.md#manage-your-maximum-daily-data-volume) to limit ingestion and costs. The daily cap in Application Insights may not limit ingestion in all cases to the selected level. (If your Application Insights resource is ingesting a lot of data, the Application Insights daily cap might need to be raised.)
-
-Instead of using the daily volume cap, use [sampling](./sampling.md) to tune the data volume to the level you want. Then, use the daily cap only as a "last resort" in case your application unexpectedly begins to send much higher volumes of telemetry.
-
-### Identify what daily data limit to define
-
-Review the Application Insights **Usage and estimated costs** page to understand the data ingestion trend and decide what daily volume cap to define. Consider the cap carefully, because you won't be able to monitor your resources after the limit is reached.
-
-### Set the Daily Cap
-
-To change the daily cap, in the **Configure** section of your Application Insights resource, in the **Usage and estimated costs** page, select **Daily Cap**.
-
-![Adjust the daily telemetry volume cap](./media/pricing/pricing-003.png)
-
-To [change the daily cap via Azure Resource Manager](./powershell.md), the property to change is the `dailyQuota`. Via Azure Resource Manager you can also set the `dailyQuotaResetTime` and the daily cap's `warningThreshold`.
-
-### Create alerts for the Daily Cap
-
-The Application Insights Daily Cap creates an event in the Azure activity log when the ingested data volume reaches the warning level or the daily cap level. You can [create an alert based on these activity log events](../alerts/alerts-activity-log.md#azure-portal). The signal names for these events are:
-
-* Application Insights component daily cap warning threshold reached
-
-* Application Insights component daily cap reached
-
-## Sampling
-[Sampling](./sampling.md) is a method of reducing the rate at which telemetry is sent from your app, while retaining the ability to find related events during diagnostic searches. You also retain correct event counts.
-
-Sampling is an effective way to reduce charges and stay within your monthly quota. The sampling algorithm retains related items of telemetry so, for example, when you use Search, you can find the request related to a particular exception. The algorithm also retains correct counts so you see the correct values in Metric Explorer for request rates, exception rates, and other counts.
-
-There are several forms of sampling.
-
-* [Adaptive sampling](./sampling.md) is the default for the ASP.NET SDK. Adaptive sampling automatically adjusts to the volume of telemetry that your app sends. It operates automatically in the SDK in your web app so that telemetry traffic on the network is reduced.
-* *Ingestion sampling* is an alternative that operates at the point where telemetry from your app enters the Application Insights service. Ingestion sampling doesn't affect the volume of telemetry sent from your app, but it reduces the volume that's retained by the service. You can use ingestion sampling to reduce the quota that's used up by telemetry from browsers and other SDKs.
-
-To set ingestion sampling, go to the **Pricing** pane:
-
-![In the Quota and pricing pane, select the Samples tile, and then select a sampling fraction](./media/pricing/pricing-004.png)
-
-> [!WARNING]
-> The **Data sampling** pane controls only the value of ingestion sampling. It doesn't reflect the sampling rate that's applied by the Application Insights SDK in your app. If the incoming telemetry has already been sampled in the SDK, ingestion sampling isn't applied.
->
-
-To discover the actual sampling rate, no matter where it's been applied, use an [Analytics query](../logs/log-query-overview.md). The query looks like this:
-
-```kusto
-requests | where timestamp > ago(1d)
-| summarize 100/avg(itemCount) by bin(timestamp, 1h)
-| render areachart
-```
-
-In each retained record, `itemCount` indicates the number of original records that it represents. It's equal to 1 + the number of previous discarded records.
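-
-As a rough check (a sketch only; discarded items aren't necessarily the same size as retained ones), you can compare the retained volume with an estimate of the pre-sampling volume by weighting `_BilledSize` by `itemCount`:
-
-```kusto
-requests
-| where timestamp > ago(1d)
-// Retained volume vs. an approximation of the pre-sampling volume
-| summarize RetainedGB = sum(_BilledSize) / exp2(30),
-            EstimatedOriginalGB = sum(_BilledSize * itemCount) / exp2(30)
-```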
-
-## Change the data retention period
-
-The default retention for Application Insights resources is 90 days. Different retention periods can be selected for each Application Insights resource. The full set of available retention periods is 30, 60, 90, 120, 180, 270, 365, 550 or 730 days. [Learn more](https://azure.microsoft.com/pricing/details/monitor/) about pricing for longer data retention.
-
-To change the retention, from your Application Insights resource, go to the **Usage and Estimated Costs** page and select the **Data Retention** option:
-
-![Screenshot that shows where to change the data retention period.](./media/pricing/pricing-005.png)
-
-When the retention is lowered, a several-day grace period applies before the oldest data is removed.
-
-The retention can also be [set programmatically using PowerShell](powershell.md#set-the-data-retention) using the `retentionInDays` parameter. If you set the data retention to 30 days, you can trigger an immediate purge of older data using the `immediatePurgeDataOn30Days` parameter, which may be useful for compliance-related scenarios. This purge functionality is only exposed via Azure Resource Manager and should be used with extreme care. The daily reset time for the data volume cap can be configured using Azure Resource Manager to set the `dailyQuotaResetTime` parameter.
-
-## Data transfer charges using Application Insights
-
-Sending data to Application Insights might incur data bandwidth charges. As described in the [Azure Bandwidth pricing page](https://azure.microsoft.com/pricing/details/bandwidth/), data transfer between Azure services located in two regions is charged as outbound data transfer at the normal rate. Inbound data transfer is free. However, this charge is very small (a few percent) compared to the costs for Application Insights log data ingestion. Consequently, controlling costs for Log Analytics needs to focus on your ingested data volume; see [Managing your data volume](#managing-your-data-volume) for guidance.
-
-## Limits summary
--
-## Disable daily cap e-mails
-
-To disable the daily volume cap e-mails, under the **Configure** section of your Application Insights resource, in the **Usage and estimated costs** pane, select **Daily Cap**. There are settings to send e-mail when the cap is reached, as well as when an adjustable warning level has been reached. If you wish to disable all daily cap volume-related emails, uncheck both boxes.
-
-## Legacy Enterprise (Per Node) pricing tier
-
-For early adopters of Azure Application Insights, there are still two possible pricing tiers: Basic and Enterprise. The Basic pricing tier is the same as described above and is the default tier. It includes all Enterprise tier features, at no extra cost. The Basic tier bills primarily on the volume of data that's ingested.
-
-These legacy pricing tiers have been renamed. The Enterprise pricing tier is now called **Per Node** and the Basic pricing tier is now called **Per GB**. These new names are used below and in the Azure portal.
-
-The Per Node (formerly Enterprise) tier has a per-node charge, and each node receives a daily data allowance. In the Per Node pricing tier, you're charged for data ingested above the included allowance. If you're using Operations Management Suite, you should choose the Per Node tier. In April 2018, we [introduced](https://azure.microsoft.com/blog/introducing-a-new-way-to-purchase-azure-monitoring-services/) a new pricing model for Azure monitoring. This model adopts a simple "pay-as-you-go" model across the complete portfolio of monitoring services. Learn more about the [new pricing model](../usage-estimated-costs.md).
-
-For current prices in your currency and region, see [Application Insights pricing](https://azure.microsoft.com/pricing/details/application-insights/).
-
-### Understanding billed usage on the legacy Enterprise (Per Node) tier
-
-As described below in more detail, the legacy Enterprise (Per Node) tier combines usage from across all Application Insights resources in a subscription to calculate the number of nodes and the data overage. Due to this combination process, **usage for all Application Insights resources in a subscription is reported against just one of the resources**. This makes it complicated to reconcile your [billed usage](#viewing-application-insights-usage-on-your-azure-bill) with the usage you observe for each Application Insights resource.
-
-> [!WARNING]
-> Because of the complexity of tracking and understanding usage of Application Insights resources in the legacy Enterprise (Per Node) tier, we strongly recommend using the current Pay-As-You-Go pricing tier.
-
-### Per Node tier and Operations Management Suite subscription entitlements
-
-Customers who purchase Operations Management Suite E1 and E2 can get Application Insights Per Node as a supplemental component at no extra cost, as [previously announced](/archive/blogs/msoms/azure-application-insights-enterprise-as-part-of-operations-management-suite-subscription). Specifically, each unit of Operations Management Suite E1 and E2 includes an entitlement to one node of the Application Insights Per Node tier. Each Application Insights node includes up to 200 MB of data ingested per day (separate from Log Analytics data ingestion), with 90-day data retention at no extra cost. The tier is described in more detail later in the article.
-
-Because this tier is applicable only to customers with an Operations Management Suite subscription, customers who don't have an Operations Management Suite subscription don't see an option to select this tier.
-
-> [!NOTE]
-> To ensure that you get this entitlement, your Application Insights resources must be in the Per Node pricing tier. This entitlement is granted only as node entitlements; Application Insights resources in the Per GB tier don't realize any benefit.
-> This entitlement isn't visible in the estimated costs shown in the **Usage and estimated cost** pane. Also, if you move a subscription to the new Azure monitoring pricing model introduced in April 2018, the Per GB tier is the only tier available. Moving a subscription to the new Azure monitoring pricing model isn't advisable if you have an Operations Management Suite subscription.
-
-### How the Per Node tier works
-
-* You pay for each node that sends telemetry for any apps in the Per Node tier.
- * A *node* is a physical or virtual server machine or a platform-as-a-service role instance that hosts your app.
- * Development machines, client browsers, and mobile devices don't count as nodes.
- * If your app has several components that send telemetry, such as a web service and a back-end worker, the components are counted separately.
- * [Live Metrics Stream](./live-stream.md) data isn't counted for pricing purposes.
-* In a subscription, your charges are per node, not per app. If you have five nodes that send telemetry for 12 apps, the charge is for five nodes.
-* Although charges are quoted per month, you're charged only for any hour in which a node sends telemetry from an app. The hourly charge is the quoted monthly charge divided by 744 (the number of hours in a 31-day month).
-* A data volume allocation of 200 MB per day is given for each node that's detected (with hourly granularity). Unused data allocation isn't carried over from one day to the next.
- * If you choose the Per Node pricing tier, each subscription gets a daily allowance of data based on the number of nodes that send telemetry to the Application Insights resources in that subscription. So, if you have five nodes that send data all day, you'll have a pooled allowance of 1 GB applied to all Application Insights resources in that subscription. It doesn't matter if certain nodes send more data than other nodes because the included data is shared across all nodes. If on a given day, the Application Insights resources receive more data than is included in the daily data allocation for this subscription, the per-GB overage data charges apply.
- * The daily data allowance is calculated as the number of hours in the day (using UTC) that each node sends telemetry divided by 24 multiplied by 200 MB. So, if you have four nodes that send telemetry during 15 of the 24 hours in the day, the included data for that day would be ((4 &#215; 15) / 24) &#215; 200 MB = 500 MB. At the price of 2.30 USD per GB for data overage, the charge would be 1.15 USD if the nodes send 1 GB of data that day.
- * The Per Node tier daily allowance isn't shared with applications for which you have chosen the Per GB tier. Unused allowance isn't carried over from day-to-day.
-
-### Examples of how to determine distinct node count
-
-| Scenario | Total daily node count |
-|:|:-:|
-| 1 application using 3 Azure App Service instances and 1 virtual server | 4 |
-| 3 applications running on 2 VMs; the Application Insights resources for these applications are in the same subscription and in the Per Node tier | 2 |
-| 4 applications whose Application Insights resources are in the same subscription; each application running 2 instances during 16 off-peak hours, and 4 instances during 8 peak hours | 13.33 |
-| Cloud services with 1 Worker Role and 1 Web Role, each running 2 instances | 4 |
-| A 5-node Azure Service Fabric cluster running 50 microservices; each microservice running 3 instances | 5|
-
-* The precise node counting depends on which Application Insights SDK your application is using.
- * In SDK versions 2.2 and later, both the Application Insights [Core SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights/) and the [Web SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Web/) report each application host as a node. Examples are the computer name for physical server and VM hosts or the instance name for cloud services. The only exception is an application that uses only the [.NET Core](https://dotnet.github.io/) and the Application Insights Core SDK. In that case, only one node is reported for all hosts because the host name isn't available.
- * For earlier versions of the SDK, the [Web SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Web/) behaves like the newer SDK versions, but the [Core SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights/) reports only one node, regardless of the number of application hosts.
- * If your application uses the SDK to set **roleInstance** to a custom value, by default, that same value is used to determine node count.
- * If you're using a new SDK version with an app that runs from client machines or mobile devices, the node count might return a number that's large (because of the large number of client machines or mobile devices).
-
-## Automation
-
-You can write a script to set the pricing tier by using Azure Resource Manager. [Learn how](powershell.md#price).
-
-## Next steps
-
-[Sampling](./sampling.md) in Application Insights is the recommended way to reduce telemetry traffic, data costs, and storage costs.
-
-[api]: app-insights-api-custom-events-metrics.md
-[apiproperties]: app-insights-api-custom-events-metrics.md#properties
-[start]: ./app-insights-overview.md
-[pricing]: https://azure.microsoft.com/pricing/details/application-insights/
-
-## Troubleshooting
-
-### Unexpected usage or estimated cost
-
-Lower your bill with updated versions of the ASP.NET Core SDK and Worker Service SDK, which [don't collect counters by default](eventcounters.md#default-counters-collected).
-
-### Microsoft Q&A question page
-
-If you have questions about how pricing works for Application Insights, you can post a question in our [Microsoft Q&A question page](/answers/topics/azure-monitor.html).
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
The above code will disable adaptive sampling. Follow the steps below to add sam
Use extension methods of `TelemetryProcessorChainBuilder` as shown below to customize sampling behavior. > [!IMPORTANT]
-> If you use this method to configure sampling, please make sure to set the `aiOptions.EnableAdaptiveSampling` property to `false` when calling `AddApplicationInsightsTelemetry()`. After making this change, you then need to follow the instructions in the code block below **exactly** in order to re-enable adaptive sampling with your customizations in place. Failure to do so can result in excess data ingestion. Always test post changing sampling settings, and set an appropriate [daily data cap](pricing.md#set-the-daily-cap) to help control your costs.
+> If you use this method to configure sampling, please make sure to set the `aiOptions.EnableAdaptiveSampling` property to `false` when calling `AddApplicationInsightsTelemetry()`. After making this change, you then need to follow the instructions in the code block below **exactly** in order to re-enable adaptive sampling with your customizations in place. Failure to do so can result in excess data ingestion. Always test post changing sampling settings, and set an appropriate [daily data cap](../logs/daily-cap.md) to help control your costs.
```csharp using Microsoft.ApplicationInsights.Extensibility
In general, for most small and medium size applications you don't need sampling.
The main advantages of sampling are: * Application Insights service drops ("throttles") data points when your app sends a very high rate of telemetry in a short time interval. Sampling reduces the likelihood that your application will see throttling occur.
-* To keep within the [quota](pricing.md) of data points for your pricing tier.
+* To keep within the [quota](../logs/daily-cap.md) of data points for your pricing tier.
* To reduce network traffic from the collection of telemetry. ### Which type of sampling should I use?
azure-monitor Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/windows-desktop.md
using Microsoft.ApplicationInsights;
By default this SDK will collect and store the computer name of the system emitting telemetry.
-Computer name is used by Application Insights [Legacy Enterprise (Per Node) pricing tier](./pricing.md#legacy-enterprise-per-node-pricing-tier) for internal billing purposes. By default if you use a telemetry initializer to override `telemetry.Context.Cloud.RoleInstance`, a separate property `ai.internal.nodeName` will be sent which will still contain the computer name value. This value will not be stored with your Application Insights telemetry, but is used internally at ingestion to allow for backwards compatibility with the legacy node-based billing model.
+Computer name is used by Application Insights [Legacy Enterprise (Per Node) pricing tier](../logs/cost-logs.md#legacy-pricing-tiers) for internal billing purposes. By default if you use a telemetry initializer to override `telemetry.Context.Cloud.RoleInstance`, a separate property `ai.internal.nodeName` will be sent which will still contain the computer name value. This value will not be stored with your Application Insights telemetry, but is used internally at ingestion to allow for backwards compatibility with the legacy node-based billing model.
-If you are on the [Legacy Enterprise (Per Node) pricing tier](./pricing.md#legacy-enterprise-per-node-pricing-tier) and simply need to override storage of the computer name, use a telemetry Initializer:
+If you are on the Legacy Enterprise (Per Node) pricing tier and simply need to override storage of the computer name, use a telemetry Initializer:
**Write custom TelemetryInitializer as below.**
Instantiate the initializer in the `Program.cs` `Main()` method below setting th
## Override transmission of computer name
-If you aren't on the [Legacy Enterprise (Per Node) pricing tier](./pricing.md#legacy-enterprise-per-node-pricing-tier) and wish to completely prevent any telemetry containing computer name from being sent, you need to use a telemetry processor.
+If you aren't on the Legacy Enterprise (Per Node) pricing tier and wish to completely prevent any telemetry containing computer name from being sent, you need to use a telemetry processor.
### Telemetry processor
namespace WindowsFormsApp2
``` > [!NOTE]
-> While you can technically use a telemetry processor as described above even if you are on the [Legacy Enterprise (Per Node) pricing tier](./pricing.md#legacy-enterprise-per-node-pricing-tier), this will result in the potential for over-billing due to the inability to properly distinguish nodes for per node pricing.
+> While you can technically use a telemetry processor as described above even if you are on the Legacy Enterprise (Per Node) pricing tier, this will result in the potential for over-billing due to the inability to properly distinguish nodes for per node pricing.
## Next steps * [Create a dashboard](./overview-dashboard.md)
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
+
+ Title: Azure Monitor best practices - Cost management
+description: Guidance and recommendations for reducing your cost for Azure Monitor.
+++ Last updated : 03/31/2022+++
+# Azure Monitor best practices - Cost management
+This article provides guidance on reducing your cloud monitoring costs by implementing and managing Azure Monitor in the most cost-effective manner. This includes using cost-saving features and ensuring that you're not paying for data collection that provides little value. It also provides guidance for regularly monitoring your usage so that you can proactively detect and identify sources responsible for excessive usage.
++
+## Understand Azure Monitor charges
+You should start by understanding the different ways that Azure Monitor charges and how to view your monthly bill. See [Azure Monitor cost and usage](usage-estimated-costs.md) for a complete description and the different tools available to analyze your charges.
+
+## Configure workspaces
+You can start using Azure Monitor with a single Log Analytics workspace using default options. As your monitoring environment grows though, you'll need to decide whether to have multiple services share a single workspace or to create multiple workspaces, and you'll want to evaluate configuration options that reduce your monitoring costs.
+
+### Configure pricing tier or dedicated cluster
+By default, workspaces will use Pay-As-You-Go pricing with no minimum data volume. If you collect a sufficient amount of data though, you can significantly decrease your cost by using a [commitment tier](logs/cost-logs.md#commitment-tiers). You commit to a daily minimum of data collected in exchange for a lower rate.
+
+[Dedicated clusters](logs/logs-dedicated-clusters.md) provide additional functionality and cost savings if you ingest at least 500 GB per day collectively among multiple workspaces in the same region. Unlike commitment tiers, workspaces in a dedicated cluster don't need to individually reach the 500 GB.
+
+See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on commitment tiers and guidance on determining which is most appropriate for your level of usage. See [Usage and estimated costs](usage-estimated-costs.md#usage-and-estimated-costs) to view estimated costs for your usage at different pricing tiers.
+
+### Optimize workspace configuration
+As your monitoring environment becomes more complex, you'll need to consider whether to create additional Log Analytics workspaces. For example, you might place resources in additional regions or implement additional services that use workspaces, such as Microsoft Sentinel and Microsoft Defender for Cloud.
+
+There can be cost implications with your workspace design, most notably when you combine different services, such as operational data from Azure Monitor and security data from Microsoft Sentinel or Microsoft Defender for Cloud. See [Workspaces with Microsoft Sentinel](logs/cost-logs.md#workspaces-with-microsoft-sentinel) and [Workspaces with Microsoft Defender for Cloud](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) for a description of these implications and guidance on determining the most cost-effective solution for your environment.
+
+## Configure tables in each workspace
+Except for [tables that don't incur charges](logs/cost-logs.md#data-size-calculation), all data in a Log Analytics workspace is billed at the same rate by default. You may be collecting data though that you query infrequently or that you need to archive for compliance but rarely access. You can significantly reduce your costs by configuring Basic Logs and by optimizing your data retention and archiving.
+
+### Configure data retention and archiving
+Data collected in a Log Analytics workspace is retained for 31 days at no charge (90 days if Azure Sentinel is enabled on the workspace). You can retain data beyond the default for trending analysis or other reporting, but there is a charge for this retention.
+
+Your retention requirement may just be for compliance reasons or for occasional investigation or analysis of historical data. In this case, you should configure [Archived Logs](logs/data-retention-archive.md) which allows you to retain data long term (up to 7 years) at a significantly reduced cost. There is a cost to search archived data or temporarily restore it for analysis. If you require infrequent access to this data though, this cost will be more than offset by the reduced retention cost.
+
+You can configure retention and archiving for all tables in a workspace or configure each table separately. This allows you to optimize your costs by setting only the retention you require for each data type.
+
+### Configure Basic Logs (preview)
+You can save on data ingestion costs by configuring [certain tables](logs/basic-logs-configure.md#which-tables-support-basic-logs) in your Log Analytics workspace that you primarily use for debugging, troubleshooting and auditing as [Basic Logs](logs/basic-logs-configure.md). Tables configured for Basic Logs have a lower ingestion cost in exchange for reduced features. They can't be used for alerting, their retention is set to eight days, they support a limited version of the query language, and there is a cost for querying them. If you query these tables infrequently though, this query cost can be more than offset by the reduced ingestion cost.
+
+The decision whether to configure a table for Basic Logs is based on the following criteria:
+
+- The table currently supports Basic Logs.
+- You don't require more than eight days of data retention for the table.
+- You only require basic queries of the data using a limited version of the query language.
+- The cost savings for data ingestion over a month exceed the expected cost of the queries you expect to run.
+
+See [Query Basic Logs in Azure Monitor (Preview)](logs/basic-logs-query.md) for details on query limitations and [Configure Basic Logs in Azure Monitor (Preview)](logs/basic-logs-configure.md) for more details.
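+
+To weigh the trade-off, it helps to know how much a candidate table ingests today. A sketch using the standard `Usage` table (the `AppTraces` table name is only an example of a table you might be evaluating):
+
+```kusto
+Usage
+| where TimeGenerated > ago(30d)
+| where IsBillable
+| where DataType == "AppTraces"   // example table you're considering for Basic Logs
+// Quantity is reported in MB, so divide by 1,000 to get GB
+| summarize IngestedGB = sum(Quantity) / 1000.
+```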
+
+## Reduce the amount of data collected
+The most straightforward strategy to reduce your costs for data ingestion and retention is to reduce the amount of data that you collect. Your goal should be to collect the minimal amount of data to meet your monitoring requirements. If you find that you're collecting data that's not being used for alerting or analysis, then you have an opportunity to reduce your monitoring costs by modifying your configuration to stop collecting data that you don't need.
+
+The configuration change will vary depending on the data source. The following sections provide guidance for configuring common data sources to reduce the data they send to the workspace.
+
+## Virtual machines
+Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. The following table lists the most common data collected from virtual machines and strategies for limiting them for each of the Azure Monitor agents.
++
+| Source | Strategy | Log Analytics agent | Azure Monitor agent |
+|:|:|:|:|
+| Event logs | Collect only required event logs and levels. For example, *Information* level events are rarely used and should typically not be collected. For Azure Monitor agent, filter particular event IDs that are frequent but not valuable. | Change the [event log configuration for the workspace](agents/data-sources-windows-events.md) | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries) to filter specific event IDs. |
+| Syslog | Reduce the number of facilities collected and only collect required event levels. For example, *Info* and *Debug* level events are rarely used and should typically not be collected. | Change the [syslog configuration for the workspace](agents/data-sources-syslog.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries) to filter specific events. |
+| Performance counters | Collect only the performance counters required and reduce the frequency of collection. For Azure Monitor agent, consider sending performance data only to Metrics and not Logs. | Change the [performance counter configuration for the workspace](agents/data-sources-performance-counters.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries) to filter specific counters. |
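+
+To decide which event IDs are worth filtering, it can help to see which ones contribute the most volume. A sketch against the standard `Event` table (Windows events collected to the workspace):
+
+```kusto
+Event
+| where TimeGenerated > ago(7d)
+// _BilledSize is the billed size in bytes of each record
+| summarize Events = count(), IngestedMB = sum(_BilledSize) / exp2(20) by EventLog, EventID
+| sort by IngestedMB desc
+```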
++
+### Use transformations to filter events
+The bulk of data collection from virtual machines will be from Windows or Syslog events. While you can provide more filtering with the Azure Monitor agent, you still may be collecting records that provide little value. Use [transformations](essentials/data-collection-rule-transformations.md) to implement more granular filtering and to remove columns that provide little value. For example, you might have a Windows event that's valuable for alerting, but it includes columns with redundant or excessive data. You can create a transformation that allows the event to be collected but removes this excessive data, as in the sketch below.
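+
+A minimal sketch of such a workspace transformation for the `Event` table (the event IDs and dropped columns are examples only; transformations support a limited subset of KQL):
+
+```kusto
+source
+// Keep only the events used for alerting (example IDs)
+| where EventID == 1000 or EventID == 7036
+// Drop bulky columns that aren't needed for the alert
+| project-away ParameterXml, EventData
+```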
+
+See the section below on filtering data with transformations for a summary on where to implement filtering and transformations for different data sources.
+
+### Multi-homing agents
+You should be cautious with any configuration using multi-homed agents where a single virtual machine sends data to multiple workspaces since you may be incurring charges for the same data multiple times. If you do multi-home agents, ensure that you're sending unique data to each workspace.
+
+You can also collect duplicate data with a single virtual machine running both the Azure Monitor agent and Log Analytics agent, even if they're both sending data to the same workspace. While the agents can coexist, each works independently without any knowledge of the other. You should continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](logs/../agents/azure-monitor-agent-migration.md) rather than using both together, unless you can ensure that each is collecting unique data.
+
+See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for guidance on analyzing your collected data to ensure that you aren't collecting duplicate data for the same machine.
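+
+One way to spot machines reporting through more than one agent is to check the `Heartbeat` table, which records a category for the reporting agent (a sketch; category values can vary by agent and version):
+
+```kusto
+Heartbeat
+| where TimeGenerated > ago(24h)
+// Category distinguishes the reporting agent, for example "Direct Agent" vs. "Azure Monitor Agent"
+| summarize AgentTypes = make_set(Category) by Computer
+| where array_length(AgentTypes) > 1
+```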
+
+## Application Insights
+There are multiple methods that you can use to limit the amount of data collected by Application Insights.
+
+* **Sampling**: [Sampling](app/sampling.md) is the primary tool you can use to tune the amount of data collected by Application Insights. Use sampling to reduce the amount of telemetry that's sent from your applications with minimal distortion of metrics.
+
+* **Limit Ajax calls**: [Limit the number of Ajax calls](app/javascript.md#configuration) that can be reported in every page view or disable Ajax reporting. Note that disabling Ajax calls will disable [JavaScript correlation](app/javascript.md#enable-correlation).
+
+* **Disable unneeded modules**: [Edit ApplicationInsights.config](app/configuration-with-applicationinsights-config.md) to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data aren't required.
+
+* **Pre-aggregate metrics**: If you put calls to TrackMetric in your application, you can reduce traffic by using the overload that accepts your calculation of the average and standard deviation of a batch of measurements. Alternatively, you can use a [pre-aggregating package](https://www.myget.org/gallery/applicationinsights-sdk-labs).
+
+* **Limit the use of custom metrics**: The Application Insights option to [Enable alerting on custom metric dimensions](app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) can increase costs because this can result in the creation of more pre-aggregation metrics.
+
+* **Ensure use of updated SDKs**: Earlier versions of the ASP.NET Core SDK and Worker Service SDK [collect a large number of counters by default](app/eventcounters.md#default-counters-collected), which are collected as custom metrics. Use later versions to specify [only required counters](app/eventcounters.md#customizing-counters-to-be-collected).
+
+## Resource logs
+The data volume for [resource logs](essentials/resource-logs.md) varies significantly between services, so you should only collect the categories that are required. You may also not want to collect platform metrics from Azure resources, since this data is already being collected in Metrics. Only configure your diagnostic settings to collect metrics if you need metric data in the workspace for more complex analysis with log queries.
+
+Diagnostic settings don't allow granular filtering of resource logs. You may require certain logs in a particular category but not others. In this case, use [ingestion-time transformations](logs/ingestion-time-transformations.md) on the workspace to filter out logs that you don't require. You can also filter out the values of columns that you don't require to save additional cost.
+
+## Other insights and services
+See the documentation for other services that store their data in a Log Analytics workspace for recommendations on optimizing their data usage. The following are some examples:
+
+- **Container insights** - [Understand monitoring costs for Container insights](containers/container-insights-cost.md#controlling-ingestion-to-reduce-cost)
+- **Microsoft Sentinel** - [Reduce costs for Microsoft Sentinel](../sentinel/billing-reduce-costs.md)
+- **Defender for Cloud** - [Setting the security event option at the workspace level](../defender-for-cloud/enable-data-collection.md#setting-the-security-event-option-at-the-workspace-level)
+++
+## Filter data with transformations (preview)
+[Data collection rule transformations in Azure Monitor](essentials/data-collection-rule-transformations.md) allow you to filter incoming data to reduce costs for data ingestion and retention. In addition to filtering records from the incoming data, you can filter out columns in the data, reducing its billable size as described in [Data size calculation](logs/cost-logs.md#data-size-calculation).
+
+Use ingestion-time transformations on the workspace to further filter data for workflows where you don't have granular control. For example, you can select categories in a [diagnostic setting](essentials/diagnostic-settings.md) to collect resource logs for a particular service, but that category might send a variety of records that you don't need. Create a transformation for the table that service uses to filter out records you don't want.
+
+You can also use ingestion-time transformations to lower the storage requirements for records you want by removing columns without useful information. For example, you might have error events in a resource log that you want for alerting, but you don't require certain columns in those records that contain a large amount of data. Create a transformation for that table that removes those columns.
+
+The following table describes methods for applying transformations to different workflows.
+
+> [!NOTE]
+> Azure tables here refers to tables that are created and maintained by Microsoft and documented in the [Azure Monitor Reference](/azure/azure-monitor-reference). Custom tables are created by custom applications and have a suffix of *_CL* in their name.
+
+| Source | Target | Description | Filtering method |
+|:|:|:|:|
+| Azure Monitor agent | Azure tables | Collect data from standard sources such as Windows events, syslog, and performance data and send to Azure tables in Log Analytics workspace. | Use XPath in DCR to collect specific data from client machine. Ingestion-time transformations in agent DCR are not yet supported. |
+| Azure Monitor agent | Custom tables | Collecting data outside of standard data sources is not yet supported. | |
+| Log Analytics agent | Azure tables | Collect data from standard sources such as Windows events, syslog, and performance data and send to Azure tables in Log Analytics workspace. | Configure data collection on the workspace. Optionally, create ingestion-time transformation in the workspace DCR to filter records and columns. |
+| Log Analytics agent | Custom tables | Configure [custom logs](agents/data-sources-custom-logs.md) on the workspace to collect file based text logs. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. You must first migrate the custom table to the new custom logs API. |
+| Data Collector API | Custom tables | Use [Data Collector API](logs/data-collector-api.md) to send data to custom tables in the workspace using REST API. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. You must first migrate the custom table to the new custom logs API. |
+| Custom Logs API | Custom tables<br>Azure tables | Use [Custom Logs API](logs/custom-logs-overview.md) to send data to custom tables in the workspace using REST API. | Configure ingestion-time transformation in the DCR for the custom log. |
+| Other data sources | Azure tables | Includes resource logs from diagnostic settings and other Azure Monitor features such as Application insights, Container insights and VM insights. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. |
++
+## Monitor workspace and analyze usage
+Once you've configured your environment and data collection for cost optimization, you need to continue to monitor it to ensure that you don't experience unexpected increases in billable usage. You should also analyze your usage regularly to determine if you have additional opportunities to reduce your usage, such as further filtering out collected data that has not proven to be useful.
++
+### Set a daily cap
+A [daily cap](logs/daily-cap.md) disables data collection in a Log Analytics workspace for the rest of the day once your configured limit is reached. This should not be used as a method to reduce costs, but rather as a preventative measure to ensure that you don't exceed a particular budget. Daily caps are typically used by organizations that are particularly cost conscious.
+
+When data collection stops, you effectively have no monitoring of features and resources relying on that workspace. Rather than relying on the daily cap alone, you can configure an alert rule to notify you when data collection reaches some level before the daily cap. This allows you to address any increases before data collection shuts down, or even temporarily disable collection for less critical resources.
+
+See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) for details on how the daily cap works and how to configure one.
+
+### Send alert when data collection is high
+In order to avoid unexpected bills, you should be proactively notified any time you experience excessive usage. This allows you to address any potential anomalies before the end of your billing period.
+
+The following example is a [log alert rule](alerts/alerts-unified-log.md) that sends an alert if the billable data volume ingested in the last 24 hours was greater than 50 GB. Modify the **Alert Logic** to use a different threshold based on expected usage in your environment. You can also increase the frequency to check usage multiple times every day, but this will result in a higher charge for the alert rule.
+
+| Setting | Value |
+|:|:|
+| **Scope** | |
+| Target scope | Select your Log Analytics workspace. |
+| **Condition** | |
+| Query | `Usage \| where IsBillable \| summarize DataGB = sum(Quantity / 1000.)` |
+| Measurement | Measure: *DataGB*<br>Aggregation type: Total<br>Aggregation granularity: 1 day |
+| Alert Logic | Operator: Greater than<br>Threshold value: 50<br>Frequency of evaluation: 1 day |
+| Actions | Select or add an [action group](alerts/action-groups.md) to notify you when the threshold is exceeded. |
+| **Details** | |
+| Severity| Warning |
+| Alert rule name | Billable data volume greater than 50 GB in 24 hours |
+
+See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for details on using log queries like the one used here to analyze billable usage in your workspace.
+
+## Analyze your collected data
+When you detect an increase in data collection, you need methods to analyze your collected data to identify the source of the increase. You should also periodically analyze data collection to determine if there's additional configuration that can decrease your usage further. This is particularly important when you add a new set of data sources, such as a new set of virtual machines, or onboard a new service.
+
+See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for different methods to analyze your collected data and billable usage. This article includes a variety of log queries that will help you identify the source of any data increases and understand your basic usage patterns.
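+
+For example, a common starting point (a sketch using the standard `Usage` table) is to break down billable volume by table over the last month:
+
+```kusto
+Usage
+| where TimeGenerated > startofday(ago(31d))
+| where IsBillable
+// Quantity is reported in MB
+| summarize IngestedGB = sum(Quantity) / 1000. by DataType
+| sort by IngestedGB desc
+```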
+
+## Next steps
+
+- See [Azure Monitor cost and usage](usage-estimated-costs.md) for a description of Azure Monitor charges and how to view and analyze your monthly bill.
+- See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
+- See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce the amount of data collected.
+- See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) to control your costs by setting a daily limit on the amount of data that may be ingested in a workspace.
azure-monitor Best Practices Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md
This article is part of the scenario [Recommendations for configuring Azure Moni
## Create Log Analytics workspace You require at least one Log Analytics workspace to enable [Azure Monitor Logs](logs/data-platform-logs.md), which is required for collecting such data as logs from Azure resources, collecting data from the guest operating system of Azure virtual machines, and for most Azure Monitor insights. Other services such as Microsoft Sentinel and Microsoft Defender for Cloud also use a Log Analytics workspace and can share the same one that you use for Azure Monitor. You can start with a single workspace to support this monitoring, but see [Designing your Azure Monitor Logs deployment](logs/design-logs-deployment.md) for guidance on when to use multiple workspaces.
-There is no cost for creating a Log Analytics workspace, but there is a potential charge once you configure data to be collected into it. See [Manage usage and costs with Azure Monitor Logs](logs/manage-cost-storage.md) for details.
+There is no cost for creating a Log Analytics workspace, but there is a potential charge once you configure data to be collected into it. See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details.
See [Create a Log Analytics workspace in the Azure portal](logs/quick-create-workspace.md) to create an initial Log Analytics workspace. See [Manage access to log data and workspaces in Azure Monitor](logs/manage-access.md) to configure access. You can use scalable methods such as Resource Manager templates to configure workspaces, though this is often not required because most environments will require only a minimal number.
See [Integrate Azure AD logs with Azure Monitor logs](../active-directory/report
### Collect resource logs and platform metrics Resources in Azure automatically generate [resource logs](essentials/platform-logs-overview.md) that provide details of operations performed within the resource. Unlike platform metrics though, you need to configure resource logs to be collected. Create a diagnostic setting to send them to a Log Analytics workspace and combine them with the other data used with Azure Monitor Logs. The same diagnostic setting can be used to also send the platform metrics for most resources to the same workspace, which allows you to analyze metric data using log queries with other collected data.
-There is a cost for collecting resource logs in your Log Analytics workspace, so only select those log categories with valuable data. Collecting all categories will incur cost for collecting data with little value. See the monitoring documentation for each Azure service for a description of categories and recommendations for which to collect. Also see [Manage usage and costs with Azure Monitor Logs](logs/manage-cost-storage.md) for details on optimizing the cost of your log collection.
+There is a cost for collecting resource logs in your Log Analytics workspace, so only select those log categories with valuable data. Collecting all categories will incur cost for collecting data with little value. See the monitoring documentation for each Azure service for a description of categories and recommendations for which to collect. Also see [Azure Monitor best practices - cost management](logs/cost-logs.md) for recommendations on optimizing the cost of your log collection.
See [Create diagnostic setting to collect resource logs and metrics in Azure](essentials/diagnostic-settings.md#create-in-azure-portal) to create a diagnostic setting for an Azure resource.
azure-monitor Best Practices Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-plan.md
A core goal of your monitoring strategy will be minimizing costs. Some data coll
- [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/)
- [Monitor usage and estimated costs in Azure Monitor](usage-estimated-costs.md)
-- [Manage usage and costs with Azure Monitor Logs](logs/manage-cost-storage.md)
-- [Manage usage and costs for Application Insights](app/pricing.md)
+
## Define strategy

Before you design and implement any monitoring solution, you should establish a monitoring strategy so that you understand the goals and requirements of your plan. The strategy defines your particular requirements, the configuration that best meets those requirements, and processes to leverage the monitoring environment to maximize your applications' performance and reliability. The configuration options that you choose for Azure Monitor should be consistent with your strategy.
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
If you are utilizing [Prometheus metric scraping](container-insights-prometheus-
## Next steps
-For more information about how to understand what the costs are likely to be based on recent usage patterns from data collected with Container insights, see [Manage your usage and estimate costs](../logs/manage-cost-storage.md).
+For more information about the costs you're likely to incur based on recent usage patterns of data collected with Container insights, see [Analyze usage in Log Analytics workspace](../logs/analyze-usage.md).
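As a rough illustration of that usage analysis, the sketch below runs a query against the standard `Usage` table with the azure-monitor-query Python package; the workspace ID is a placeholder, and grouping by `DataType` is only one reasonable way to slice recent ingestion volume (Container insights data shows up under tables such as ContainerLog and InsightsMetrics).

```python
# Sketch only: summarize billable ingestion per table for the last 30 days.
# Assumes azure-identity and azure-monitor-query are installed and that
# WORKSPACE_ID is replaced with the workspace (customer) GUID.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<workspace-guid>"  # placeholder

QUERY = """
Usage
| where IsBillable == true
| summarize BillableGB = sum(Quantity) / 1000.0 by DataType
| sort by BillableGB desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=30))

# Print each table name with its approximate billable volume in GB.
for table in response.tables:
    for row in table.rows:
        print(row)
```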
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
To alert on what matters, Container insights includes the following metric alert
|**(New)Average container CPU %** |Calculates average CPU used per container.|When average CPU usage per container is greater than 95%.|
|**(New)Average container working set memory %** |Calculates average working set memory used per container.|When average working set memory usage per container is greater than 95%. |
|Average CPU % |Calculates average CPU used per node. |When average node CPU utilization is greater than 80% |
-| Daily Data Cap Breach | When data cap is breached| When the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/manage-cost-storage.md#manage-your-maximum-daily-data-volume) |
+| Daily Data Cap Breach | When data cap is breached| When the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/daily-cap.md) |
|Average Disk Usage % |Calculates average disk usage for a node.|When disk usage for a node is greater than 80%. |
|**(New)Average Persistent Volume Usage %** |Calculates average PV usage per pod. |When average PV usage per pod is greater than 80%.|
|Average Working set memory % |Calculates average Working set memory for a node. |When average Working set memory for a node is greater than 80%. |
azure-monitor Container Insights Prometheus Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-integration.md
The output will show results similar to the following:
![Log query results of data ingestion volume](./media/container-insights-prometheus-integration/log-query-example-usage-02.png)
-Further information on how to monitor data usage and analyze cost is available in [Manage usage and costs with Azure Monitor Logs](../logs/manage-cost-storage.md).
+Further information on how to analyze usage is available in [Analyze usage in Log Analytics workspace](../logs/analyze-usage.md).
## Next steps
azure-monitor Metrics Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-custom-overview.md
For details on when billing will be enabled for custom metrics and metrics queri
Custom metrics are retained for the [same amount of time as platform metrics](../essentials/data-platform-metrics.md#retention-of-metrics).

> [!NOTE]
-> Metrics sent to Azure Monitor via the Application Insights SDK are billed as ingested log data. They incur additional metrics charges only if the Application Insights feature [Enable alerting on custom metric dimensions](../app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) has been selected. This checkbox sends data to the Azure Monitor metrics database by using the custom metrics API to allow the more complex alerting. Learn more about the [Application Insights pricing model](../app/pricing.md#pricing-model) and [prices in your region](https://azure.microsoft.com/pricing/details/monitor/).
+> Metrics sent to Azure Monitor via the Application Insights SDK are billed as ingested log data. They incur additional metrics charges only if the Application Insights feature [Enable alerting on custom metric dimensions](../app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) has been selected. This checkbox sends data to the Azure Monitor metrics database by using the custom metrics API to allow the more complex alerting. Learn more about the [Application Insights pricing model](../usage-estimated-costs.md) and [prices in your region](https://azure.microsoft.com/pricing/details/monitor/).
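To make the note above concrete, here's a rough sketch of what a direct call to the custom metrics ingestion endpoint can look like; the region, resource ID, metric name, namespace, and dimension values are placeholder assumptions, and the payload follows the publicly documented `baseData` schema rather than anything specific to this article.

```python
# Sketch only: publish one pre-aggregated custom metric value to the Azure Monitor
# metrics store. Assumes azure-identity and requests are installed; the resource ID,
# region, and metric details below are placeholders.
from datetime import datetime, timezone

import requests
from azure.identity import DefaultAzureCredential

REGION = "eastus"  # placeholder: region of the target resource
RESOURCE_ID = "/subscriptions/<sub>/resourceGroups/<rg>/providers/<type>/<name>"  # placeholder

# Token audience for the custom metrics ingestion endpoint.
token = DefaultAzureCredential().get_token("https://monitoring.azure.com/.default").token

payload = {
    "time": datetime.now(timezone.utc).isoformat(),
    "data": {
        "baseData": {
            "metric": "QueueDepth",          # assumed metric name
            "namespace": "QueueProcessing",  # assumed custom namespace
            "dimNames": ["QueueName"],
            "series": [
                {"dimValues": ["orders"], "min": 3, "max": 20, "sum": 28, "count": 3}
            ],
        }
    },
}

resp = requests.post(
    f"https://{REGION}.monitoring.azure.com{RESOURCE_ID}/metrics",
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
```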
## How to send custom metrics
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
description: List of metrics available for each resource type with Azure Monitor
Previously updated : 02/08/2022 Last updated : 03/03/2022
The Azure Monitor agent replaces the Azure Diagnostics extension and Log Analyti
This latest update adds a new column and reorders the metrics to be alphabetical. The additional information means that the tables might have a horizontal scroll bar at the bottom, depending on the width of your browser window. If you seem to be missing information, use the scroll bar to see the entirety of the table.
-
## microsoft.aadiam/azureADMetrics

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|BackendDuration|Yes|Duration of Backend Requests|Milliseconds|Average|Duration of Backend Requests in milliseconds|Location, Hostname|
-|Capacity|Yes|Capacity|Percent|Average|Utilization metric for ApiManagement service|Location|
-|Duration|Yes|Overall Duration of Gateway Requests|Milliseconds|Average|Overall Duration of Gateway Requests in milliseconds|Location, Hostname|
+|BackendDuration|Yes|Duration of Backend Requests|MilliSeconds|Average|Duration of Backend Requests in milliseconds|Location, Hostname|
+|Capacity|Yes|Capacity|Percent|Average|Utilization metric for ApiManagement service. Note: For SKUs other than Premium, 'Max' aggregation will show the value as 0.|Location|
+|ConnectionAttempts|Yes|WebSocket Connection Attempts (Preview)|Count|Total|Count of WebSocket connection attempts based on selected source and destination|Location, Source, Destination, State|
+|Duration|Yes|Overall Duration of Gateway Requests|MilliSeconds|Average|Overall Duration of Gateway Requests in milliseconds|Location, Hostname|
|EventHubDroppedEvents|Yes|Dropped EventHub Events|Count|Total|Number of events skipped because of queue size limit reached|Location| |EventHubRejectedEvents|Yes|Rejected EventHub Events|Count|Total|Number of rejected EventHub events (wrong configuration or unauthorized)|Location| |EventHubSuccessfulEvents|Yes|Successful EventHub Events|Count|Total|Number of successful EventHub events|Location|
This latest update adds a new column and reorders the metrics to be alphabetical
|SuccessfulRequests|Yes|Successful Gateway Requests (Deprecated)|Count|Total|Number of successful gateway requests - Use multi-dimension request metric with GatewayResponseCodeCategory dimension instead|Location, Hostname| |TotalRequests|Yes|Total Gateway Requests (Deprecated)|Count|Total|Number of gateway requests - Use multi-dimension request metric with GatewayResponseCodeCategory dimension instead|Location, Hostname| |UnauthorizedRequests|Yes|Unauthorized Gateway Requests (Deprecated)|Count|Total|Number of unauthorized gateway requests - Use multi-dimension request metric with GatewayResponseCodeCategory dimension instead|Location, Hostname|
+|WebSocketMessages|Yes|WebSocket Messages (Preview)|Count|Total|Count of WebSocket messages based on selected source and destination|Location, Source, Destination|
## Microsoft.AppConfiguration/configurationStores
This latest update adds a new column and reorders the metrics to be alphabetical
|RequestHandled|Yes|Handled Requests|Count|Total|Handled Requests|Node| |StorageUsage|Yes|Storage Usage|Bytes|Average|Storage Usage|Node|
-## Microsoft.BotService/botServices
+## microsoft.botservice/botservices
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|RequestLatency|Yes|Requests Latencies|Milliseconds|Average|How long it takes to get request response|Operation, Authentication, Protocol, ResourceId, Region|
-|RequestsTraffic|Yes|Requests Traffic|Count|Average|Number of requests within a given period of time|Operation, Authentication, Protocol, ResourceId, Region, StatusCode, StatusCodeClass, StatusText|
+|RequestLatency|Yes|Request Latency|Milliseconds|Total|Time taken by the server to process the request|Operation, Authentication, Protocol, DataCenter|
+|RequestsTraffic|Yes|Requests Traffic|Percent|Count|Number of Requests Made|Operation, Authentication, Protocol, StatusCode, StatusCodeClass, DataCenter|
+
## Microsoft.Cache/redis
This latest update adds a new column and reorders the metrics to be alphabetical
|allcacheRead|Yes|Cache Read (Instance Based)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary| |allcacheWrite|Yes|Cache Write (Instance Based)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary| |allconnectedclients|Yes|Connected Clients (Instance Based)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
+|allConnectionsClosedPerSecond|Yes|Connections Closed Per Second (Instance Based)|CountPerSecond|Maximum|The number of instantaneous connections closed per second on the cache via port 6379 or 6380 (SSL). For more details, see https://aka.ms/redis/metrics.|ShardId, Primary, Ssl|
+|allConnectionsCreatedPerSecond|Yes|Connections Created Per Second (Instance Based)|CountPerSecond|Maximum|The number of instantaneous connections created per second on the cache via port 6379 or 6380 (SSL). For more details, see https://aka.ms/redis/metrics.|ShardId, Primary, Ssl|
|allevictedkeys|Yes|Evicted Keys (Instance Based)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary| |allexpiredkeys|Yes|Expired Keys (Instance Based)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary| |allgetcommands|Yes|Gets (Instance Based)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
This latest update adds a new column and reorders the metrics to be alphabetical
|usedmemoryRss8|Yes|Used Memory RSS (Shard 8)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
|usedmemoryRss9|Yes|Used Memory RSS (Shard 9)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+
## Microsoft.Cache/redisEnterprise

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|cachemisses|Yes|Cache Misses|Count|Total||InstanceId| |cacheRead|Yes|Cache Read|BytesPerSecond|Maximum||InstanceId| |cacheWrite|Yes|Cache Write|BytesPerSecond|Maximum||InstanceId|
-|CharactersTrained|Yes|Characters Trained (Deprecated)|Count|Total|Total number of characters trained.|ApiName, OperationName, Region|
-|CharactersTranslated|Yes|Characters Translated (Deprecated)|Count|Total|Total number of characters in incoming text request.|ApiName, OperationName, Region|
|connectedclients|Yes|Connected Clients|Count|Maximum||InstanceId| |errors|Yes|Errors|Count|Maximum||InstanceId, ErrorType| |evictedkeys|Yes|Evicted Keys|Count|Total||No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|TableEntityCount|Yes|Table Entity Count|Count|Average|The number of table entities in the storage account's Table service.|No Dimensions|
|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response.|ResponseType, GeoType, ApiName, Authentication|
-
## Microsoft.Cloudtest/hostedpools

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalRead|No|TotalRead|BytesPerSecond|Average|The total lustre file system read per second|filesystem_name, category, system|
|TotalWrite|No|TotalWrite|BytesPerSecond|Average|The total lustre file system write per second|filesystem_name, category, system|
-
## Microsoft.CognitiveServices/accounts

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
+|AudioSecondsTranscribed|Yes|Audio Seconds Transcribed|Count|Total|Number of seconds transcribed|ApiName, FeatureName, UsageChannel, Region|
+|AudioSecondsTranslated|Yes|Audio Seconds Translated|Count|Total|Number of seconds translated|ApiName, FeatureName, UsageChannel, Region|
|BlockedCalls|Yes|Blocked Calls|Count|Total|Number of calls that exceeded rate or quota limit.|ApiName, OperationName, Region| |CharactersTrained|Yes|Characters Trained (Deprecated)|Count|Total|Total number of characters trained.|ApiName, OperationName, Region| |CharactersTranslated|Yes|Characters Translated (Deprecated)|Count|Total|Total number of characters in incoming text request.|ApiName, OperationName, Region| |ClientErrors|Yes|Client Errors|Count|Total|Number of calls with client side error (HTTP response code 4xx).|ApiName, OperationName, Region|
+|ComputerVisionTransactions|Yes|Computer Vision Transactions|Count|Total|Number of Computer Vision Transactions|ApiName, FeatureName, UsageChannel, Region|
+|CustomVisionTrainingTime|Yes|Custom Vision Training Time|Seconds|Total|Custom Vision training time|ApiName, FeatureName, UsageChannel, Region|
+|CustomVisionTransactions|Yes|Custom Vision Transactions|Count|Total|Number of Custom Vision prediction transactions|ApiName, FeatureName, UsageChannel, Region|
|DataIn|Yes|Data In|Bytes|Total|Size of incoming data in bytes.|ApiName, OperationName, Region| |DataOut|Yes|Data Out|Bytes|Total|Size of outgoing data in bytes.|ApiName, OperationName, Region|
+|DocumentCharactersTranslated|Yes|Document Characters Translated|Count|Total|Number of characters in document translation request.|ApiName, FeatureName, UsageChannel, Region|
+|DocumentCustomCharactersTranslated|Yes|Document Custom Characters Translated|Count|Total|Number of characters in custom document translation request.|ApiName, FeatureName, UsageChannel, Region|
+|FaceImagesTrained|Yes|Face Images Trained|Count|Total|Number of images trained. 1,000 images trained per transaction.|ApiName, FeatureName, UsageChannel, Region|
+|FacesStored|Yes|Faces Stored|Count|Total|Number of faces stored, prorated daily. The number of faces stored is reported daily.|ApiName, FeatureName, UsageChannel, Region|
+|FaceTransactions|Yes|Face Transactions|Count|Total|Number of API calls made to Face service|ApiName, FeatureName, UsageChannel, Region|
+|ImagesStored|Yes|Images Stored|Count|Total|Number of Custom Vision images stored.|ApiName, FeatureName, UsageChannel, Region|
|Latency|Yes|Latency|MilliSeconds|Average|Latency in milliseconds.|ApiName, OperationName, Region| |LearnedEvents|Yes|Learned Events|Count|Total|Number of Learned Events.|IsMatchBaseline, Mode, RunId|
-|MatchedRewards|Yes|Matched Rewards|Count|Total| Number of Matched Rewards.|Mode, RunId|
+|LUISSpeechRequests|Yes|LUIS Speech Requests|Count|Total|Number of LUIS speech to intent understanding requests|ApiName, FeatureName, UsageChannel, Region|
+|LUISTextRequests|Yes|LUIS Text Requests|Count|Total|Number of LUIS text requests|ApiName, FeatureName, UsageChannel, Region|
+|MatchedRewards|Yes|Matched Rewards|Count|Total|Number of Matched Rewards.|Mode, RunId|
+|NumberofSpeakerProfiles|Yes|Number of Speaker Profiles|Count|Total|Number of speaker profiles enrolled. Prorated hourly.|ApiName, FeatureName, UsageChannel, Region|
|ObservedRewards|Yes|Observed Rewards|Count|Total|Number of Observed Rewards.|Mode, RunId|
-|ProcessedCharacters|Yes|Processed Characters|Count|Total|Number of Characters.|ApiName, FeatureName, UsageChannel, Region|
+|ProcessedCharacters|Yes|Processed Characters|Count|Total|Number of Characters processed by Immersive Reader.|ApiName, FeatureName, UsageChannel, Region|
+|ProcessedHealthTextRecords|Yes|Processed Health Text Records|Count|Total|Number of health text records processed|ApiName, FeatureName, UsageChannel, Region|
+|ProcessedImages|Yes|Processed Images|Count|Total|Number of images processed|ApiName, FeatureName, UsageChannel, Region|
+|ProcessedPages|Yes|Processed Pages|Count|Total|Number of pages processed|ApiName, FeatureName, UsageChannel, Region|
|ProcessedTextRecords|Yes|Processed Text Records|Count|Total|Count of Text Records.|ApiName, FeatureName, UsageChannel, Region| |ServerErrors|Yes|Server Errors|Count|Total|Number of calls with service internal error (HTTP response code 5xx).|ApiName, OperationName, Region|
-|SpeechSessionDuration|Yes|Speech Session Duration|Seconds|Total|Total duration of speech session in seconds.|ApiName, OperationName, Region|
+|SpeakerRecognitionTransactions|Yes|Speaker Recognition Transactions|Count|Total|Number of speaker recognition transactions|ApiName, FeatureName, UsageChannel, Region|
+|SpeechModelHostingHours|Yes|Speech Model Hosting Hours|Count|Total|Number of speech model hosting hours|ApiName, FeatureName, UsageChannel, Region|
+|SpeechSessionDuration|Yes|Speech Session Duration (Deprecated)|Seconds|Total|Total duration of speech session in seconds.|ApiName, OperationName, Region|
|SuccessfulCalls|Yes|Successful Calls|Count|Total|Number of successful calls.|ApiName, OperationName, Region|
+|SynthesizedCharacters|Yes|Synthesized Characters|Count|Total|Number of Characters.|ApiName, FeatureName, UsageChannel, Region|
+|TextCharactersTranslated|Yes|Text Characters Translated|Count|Total|Number of characters in incoming text translation request.|ApiName, FeatureName, UsageChannel, Region|
+|TextCustomCharactersTranslated|Yes|Text Custom Characters Translated|Count|Total|Number of characters in incoming custom text translation request.|ApiName, FeatureName, UsageChannel, Region|
+|TextTrainedCharacters|Yes|Text Trained Characters|Count|Total|Number of characters trained using text translation.|ApiName, FeatureName, UsageChannel, Region|
|TotalCalls|Yes|Total Calls|Count|Total|Total number of calls.|ApiName, OperationName, Region| |TotalErrors|Yes|Total Errors|Count|Total|Total number of calls with error response (HTTP response code 4xx or 5xx).|ApiName, OperationName, Region| |TotalTokenCalls|Yes|Total Token Calls|Count|Total|Total number of token calls.|ApiName, OperationName, Region|
-|TotalTransactions|Yes|Total Transactions|Count|Total|Total number of transactions.|No Dimensions|
+|TotalTransactions|Yes|Total Transactions (Deprecated)|Count|Total|Total number of transactions.|No Dimensions|
+|VoiceModelHostingHours|Yes|Voice Model Hosting Hours|Count|Total|Number of Hours.|ApiName, FeatureName, UsageChannel, Region|
+|VoiceModelTrainingMinutes|Yes|Voice Model Training Minutes|Count|Total|Number of Minutes.|ApiName, FeatureName, UsageChannel, Region|
## Microsoft.Communication/CommunicationServices
This latest update adds a new column and reorders the metrics to be alphabetical
|||||||| |APIRequestAuthentication|No|Authentication API Requests|Count|Count|Count of all requests against the Communication Services Authentication endpoint.|Operation, StatusCode, StatusCodeClass| |APIRequestChat|Yes|Chat API Requests|Count|Count|Count of all requests against the Communication Services Chat endpoint.|Operation, StatusCode, StatusCodeClass|
+|APIRequestNetworkTraversal|No|Network Traversal API Requests|Count|Count|Count of all requests against the Communication Services Network Traversal endpoint.|Operation, StatusCode, StatusCodeClass|
|APIRequestSMS|Yes|SMS API Requests|Count|Count|Count of all requests against the Communication Services SMS endpoint.|Operation, StatusCode, StatusCodeClass, ErrorCode|
This latest update adds a new column and reorders the metrics to be alphabetical
|Network Out Total|Yes|Network Out Total|Bytes|Total|The number of bytes out on all network interfaces by the Virtual Machine(s) (Outgoing Traffic)|RoleInstanceId, RoleId|
|Percentage CPU|Yes|Percentage CPU|Percent|Average|The percentage of allocated compute units that are currently in use by the Virtual Machine(s)|RoleInstanceId, RoleId|
-
## microsoft.compute/disks

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|Composite Disk Write Bytes/sec|No|Disk Write Bytes/sec(Preview)|Bytes|Average|Bytes/sec written to disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available|No Dimensions|
|Composite Disk Write Operations/sec|No|Disk Write Operations/sec(Preview)|Bytes|Average|Number of Write IOs performed on a disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available|No Dimensions|
-
## Microsoft.Compute/virtualMachines

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|DataPipelineMessageCount|Yes|Data pipeline message count|Count|Total|The total number of messages sent to the MCVP data pipeline for storage.|VehicleId, DeviceName, IsSuccessful, FailureCategory| |ExtensionInvocationCount|Yes|Extension invocation count|Count|Total|Total number of times an extension was called.|VehicleId, DeviceName, ExtensionName, IsSuccessful, FailureCategory| |ExtensionInvocationRuntime|Yes|Extension invocation execution time|Milliseconds|Average|Average execution time spent inside an extension in milliseconds.|VehicleId, DeviceName, ExtensionName, IsSuccessful, FailureCategory|
+|MessagesInCount|Yes|Messages received count|Count|Total|The total number of vehicle-sourced publishes.|VehicleId, DeviceName, IsSuccessful, FailureCategory|
+|MessagesOutCount|Yes|Messages sent count|Count|Total|The total number of cloud-sourced publishes.|VehicleId, DeviceName, IsSuccessful, FailureCategory|
|ProvisionerServiceRequestRuntime|Yes|Vehicle provision execution time|Milliseconds|Average|The average execution time of vehicle provision requests in milliseconds|VehicleId, DeviceName, IsSuccessful, FailureCategory| |ProvisionerServiceRequests|Yes|Vehicle provision service requests|Count|Total|Total number of vehicle provision requests|VehicleId, DeviceName, IsSuccessful, FailureCategory| |StateStoreReadRequestLatency|Yes|State store read execution time|Milliseconds|Average|State store read request execution time average in milliseconds.|VehicleId, DeviceName, ExtensionName, IsSuccessful, FailureCategory|
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|AgentPoolCPUTime|Yes|AgentPool CPU Time|Seconds|Total|AgentPool CPU Time in seconds|No Dimensions|
-|RunDuration|Yes|Run Duration|Milliseconds|Total|Run Duration in milliseconds|No Dimensions|
-|SuccessfulPullCount|Yes|Successful Pull Count|Count|Average|Number of successful image pulls|No Dimensions|
-|SuccessfulPushCount|Yes|Successful Push Count|Count|Average|Number of successful image pushes|No Dimensions|
-|TotalPullCount|Yes|Total Pull Count|Count|Average|Number of image pulls in total|No Dimensions|
-|TotalPushCount|Yes|Total Push Count|Count|Average|Number of image pushes in total|No Dimensions|
+|RunDuration|Yes|Run Duration|MilliSeconds|Total|Run Duration in milliseconds|No Dimensions|
+|StorageUsed|Yes|Storage used|Bytes|Average|The amount of storage used by the container registry. For a registry account, it's the sum of capacity used by all the repositories within a registry. It's sum of capacity used by shared layers, manifest files, and replica copies in each of its repositories.|Geolocation|
+|SuccessfulPullCount|Yes|Successful Pull Count|Count|Total|Number of successful image pulls|No Dimensions|
+|SuccessfulPushCount|Yes|Successful Push Count|Count|Total|Number of successful image pushes|No Dimensions|
+|TotalPullCount|Yes|Total Pull Count|Count|Total|Number of image pulls in total|No Dimensions|
+|TotalPushCount|Yes|Total Push Count|Count|Total|Number of image pushes in total|No Dimensions|
## Microsoft.ContainerService/managedClusters

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
+|apiserver_current_inflight_requests|No|Inflight Requests|Count|Average|Maximum number of currently used inflight requests on the apiserver per request kind in the last second|requestKind|
+|cluster_autoscaler_cluster_safe_to_autoscale|No|Cluster Health|Count|Average|Determines whether or not cluster autoscaler will take action on the cluster|No Dimensions|
+|cluster_autoscaler_scale_down_in_cooldown|No|Scale Down Cooldown|Count|Average|Determines if the scale down is in cooldown - No nodes will be removed during this timeframe|No Dimensions|
+|cluster_autoscaler_unneeded_nodes_count|No|Unneeded Nodes|Count|Average|Cluster autoscaler marks those nodes as candidates for deletion, and they are eventually deleted|No Dimensions|
+|cluster_autoscaler_unschedulable_pods_count|No|Unschedulable Pods|Count|Average|Number of pods that are currently unschedulable in the cluster|No Dimensions|
|kube_node_status_allocatable_cpu_cores|No|Total number of available cpu cores in a managed cluster|Count|Average|Total number of available cpu cores in a managed cluster|No Dimensions| |kube_node_status_allocatable_memory_bytes|No|Total amount of available memory in a managed cluster|Bytes|Average|Total amount of available memory in a managed cluster|No Dimensions| |kube_node_status_condition|No|Statuses for various node conditions|Count|Average|Statuses for various node conditions|condition, status, status2, node| |kube_pod_status_phase|No|Number of pods by phase|Count|Average|Number of pods by phase|phase, namespace, pod| |kube_pod_status_ready|No|Number of pods in Ready state|Count|Average|Number of pods in Ready state|namespace, pod, condition|
+|node_cpu_usage_millicores|Yes|CPU Usage Millicores|MilliCores|Average|Aggregated measurement of CPU utilization in millicores across the cluster|node, nodepool|
+|node_cpu_usage_percentage|Yes|CPU Usage Percentage|Percent|Average|Aggregated average CPU utilization measured in percentage across the cluster|node, nodepool|
+|node_disk_usage_bytes|Yes|Disk Used Bytes|Bytes|Average|Disk space used in bytes by device|node, nodepool, device|
+|node_disk_usage_percentage|Yes|Disk Used Percentage|Percent|Average|Disk space used in percent by device|node, nodepool, device|
+|node_memory_rss_bytes|Yes|Memory RSS Bytes|Bytes|Average|Container RSS memory used in bytes|node, nodepool|
+|node_memory_rss_percentage|Yes|Memory RSS Percentage|Percent|Average|Container RSS memory used in percent|node, nodepool|
+|node_memory_working_set_bytes|Yes|Memory Working Set Bytes|Bytes|Average|Container working set memory used in bytes|node, nodepool|
+|node_memory_working_set_percentage|Yes|Memory Working Set Percentage|Percent|Average|Container working set memory used in percent|node, nodepool|
+|node_network_in_bytes|Yes|Network In Bytes|Bytes|Average|Network received bytes|node, nodepool|
+|node_network_out_bytes|Yes|Network Out Bytes|Bytes|Average|Network transmitted bytes|node, nodepool|
## Microsoft.CustomProviders/resourceproviders
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|active_connections|Yes|Active Connections|Count|Average|Active Connections|ServerName|
+|apps_reserved_memory_percent|Yes|Reserved Memory percent|Percent|Average|Percentage of Commit Memory Limit Reserved by Applications|ServerName|
|cpu_percent|Yes|CPU percent|Percent|Average|CPU percent|ServerName| |iops|Yes|IOPS|Count|Average|IO operations per second|ServerName| |memory_percent|Yes|Memory percent|Percent|Average|Memory percent|ServerName|
This latest update adds a new column and reorders the metrics to be alphabetical
|d2c.endpoints.egress.storage|Yes|Routing: messages delivered to storage|Count|Total|The number of times IoT Hub routing successfully delivered messages to storage endpoints.|No Dimensions| |d2c.endpoints.egress.storage.blobs|Yes|Routing: blobs delivered to storage|Count|Total|The number of times IoT Hub routing delivered blobs to storage endpoints.|No Dimensions| |d2c.endpoints.egress.storage.bytes|Yes|Routing: data delivered to storage|Bytes|Total|The amount of data (bytes) IoT Hub routing delivered to storage endpoints.|No Dimensions|
-|d2c.endpoints.latency.builtIn.events|Yes|Routing: message latency for messages/events|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into the built-in endpoint (messages/events).|No Dimensions|
-|d2c.endpoints.latency.eventHubs|Yes|Routing: message latency for Event Hub|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and message ingress into an Event Hub endpoint.|No Dimensions|
-|d2c.endpoints.latency.serviceBusQueues|Yes|Routing: message latency for Service Bus Queue|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a Service Bus queue endpoint.|No Dimensions|
-|d2c.endpoints.latency.serviceBusTopics|Yes|Routing: message latency for Service Bus Topic|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a Service Bus topic endpoint.|No Dimensions|
-|d2c.endpoints.latency.storage|Yes|Routing: message latency for storage|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a storage endpoint.|No Dimensions|
-|d2c.telemetry.egress.dropped|Yes|Routing: telemetry messages dropped|Count|Total|The number of times messages were dropped by IoT Hub routing due to dead endpoints. This value does not count messages delivered to fallback route as dropped messages are not delivered there.|No Dimensions|
+|d2c.endpoints.latency.builtIn.events|Yes|Routing: message latency for messages/events|MilliSeconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into the built-in endpoint (messages/events).|No Dimensions|
+|d2c.endpoints.latency.eventHubs|Yes|Routing: message latency for Event Hub|MilliSeconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and message ingress into an Event Hub endpoint.|No Dimensions|
+|d2c.endpoints.latency.serviceBusQueues|Yes|Routing: message latency for Service Bus Queue|MilliSeconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a Service Bus queue endpoint.|No Dimensions|
+|d2c.endpoints.latency.serviceBusTopics|Yes|Routing: message latency for Service Bus Topic|MilliSeconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a Service Bus topic endpoint.|No Dimensions|
+|d2c.endpoints.latency.storage|Yes|Routing: message latency for storage|MilliSeconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a storage endpoint.|No Dimensions|
+|d2c.telemetry.egress.dropped|Yes|Routing: telemetry messages dropped |Count|Total|The number of times messages were dropped by IoT Hub routing due to dead endpoints. This value does not count messages delivered to fallback route as dropped messages are not delivered there.|No Dimensions|
|d2c.telemetry.egress.fallback|Yes|Routing: messages delivered to fallback|Count|Total|The number of times IoT Hub routing delivered messages to the endpoint associated with the fallback route.|No Dimensions| |d2c.telemetry.egress.invalid|Yes|Routing: telemetry messages incompatible|Count|Total|The number of times IoT Hub routing failed to deliver messages due to an incompatibility with the endpoint. This value does not include retries.|No Dimensions|
-|d2c.telemetry.egress.orphaned|Yes|Routing: telemetry messages orphaned|Count|Total|The number of times messages were orphaned by IoT Hub routing because they didn't match any routing rules (including the fallback rule).|No Dimensions|
+|d2c.telemetry.egress.orphaned|Yes|Routing: telemetry messages orphaned |Count|Total|The number of times messages were orphaned by IoT Hub routing because they didn't match any routing rules (including the fallback rule). |No Dimensions|
|d2c.telemetry.egress.success|Yes|Routing: telemetry messages delivered|Count|Total|The number of times messages were successfully delivered to all endpoints using IoT Hub routing. If a message is routed to multiple endpoints, this value increases by one for each successful delivery. If a message is delivered to the same endpoint multiple times, this value increases by one for each successful delivery.|No Dimensions| |d2c.telemetry.ingress.allProtocol|Yes|Telemetry message send attempts|Count|Total|Number of device-to-cloud telemetry messages attempted to be sent to your IoT hub|No Dimensions| |d2c.telemetry.ingress.sendThrottle|Yes|Number of throttling errors|Count|Total|Number of throttling errors due to device throughput throttles|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|devices.connectedDevices.allProtocol|Yes|Connected devices (deprecated) |Count|Total|Number of devices connected to your IoT hub|No Dimensions| |devices.totalDevices|Yes|Total devices (deprecated)|Count|Total|Number of devices registered to your IoT hub|No Dimensions| |EventGridDeliveries|Yes|Event Grid deliveries|Count|Total|The number of IoT Hub events published to Event Grid. Use the Result dimension for the number of successful and failed requests. EventType dimension shows the type of event (https://aka.ms/ioteventgrid).|Result, EventType|
-|EventGridLatency|Yes|Event Grid latency|Milliseconds|Average|The average latency (milliseconds) from when the Iot Hub event was generated to when the event was published to Event Grid. This number is an average between all event types. Use the EventType dimension to see latency of a specific type of event.|EventType|
+|EventGridLatency|Yes|Event Grid latency|MilliSeconds|Average|The average latency (milliseconds) from when the Iot Hub event was generated to when the event was published to Event Grid. This number is an average between all event types. Use the EventType dimension to see latency of a specific type of event.|EventType|
|jobs.cancelJob.failure|Yes|Failed job cancellations|Count|Total|The count of all failed calls to cancel a job.|No Dimensions| |jobs.cancelJob.success|Yes|Successful job cancellations|Count|Total|The count of all successful calls to cancel a job.|No Dimensions| |jobs.completed|Yes|Completed jobs|Count|Total|The count of all completed jobs.|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|jobs.queryJobs.success|Yes|Successful job queries|Count|Total|The count of all successful calls to query jobs.|No Dimensions| |RoutingDataSizeInBytesDelivered|Yes|Routing Delivery Message Size in Bytes (preview)|Bytes|Total|The total size in bytes of messages delivered by IoT hub to an endpoint. You can use the EndpointName and EndpointType dimensions to view the size of the messages in bytes delivered to your different endpoints. The metric value increases for every message delivered, including if the message is delivered to multiple endpoints or if the message is delivered to the same endpoint multiple times.|EndpointType, EndpointName, RoutingSource| |RoutingDeliveries|Yes|Routing Deliveries (preview)|Count|Total|The number of times IoT Hub attempted to deliver messages to all endpoints using routing. To see the number of successful or failed attempts, use the Result dimension. To see the reason of failure, like invalid, dropped, or orphaned, use the FailureReasonCategory dimension. You can also use the EndpointName and EndpointType dimensions to understand how many messages were delivered to your different endpoints. The metric value increases by one for each delivery attempt, including if the message is delivered to multiple endpoints or if the message is delivered to the same endpoint multiple times.|EndpointType, EndpointName, FailureReasonCategory, Result, RoutingSource|
-|RoutingDeliveryLatency|Yes|Routing Delivery Latency (preview)|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into an endpoint. You can use the EndpointName and EndpointType dimensions to understand the latency to your different endpoints.|EndpointType, EndpointName, RoutingSource|
+|RoutingDeliveryLatency|Yes|Routing Delivery Latency (preview)|MilliSeconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into an endpoint. You can use the EndpointName and EndpointType dimensions to understand the latency to your different endpoints.|EndpointType, EndpointName, RoutingSource|
|totalDeviceCount|No|Total devices|Count|Average|Number of devices registered to your IoT hub|No Dimensions| |twinQueries.failure|Yes|Failed twin queries|Count|Total|The count of all failed twin queries.|No Dimensions| |twinQueries.resultSize|Yes|Twin queries result size|Bytes|Average|The average, min, and max of the result size of all successful twin queries.|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|BillingApiOperations|Yes|Billing API Operations|Count|Total|Billing metric for the count of all API requests made against the Azure Digital Twins service.|MeterId| |BillingMessagesProcessed|Yes|Billing Messages Processed|Count|Total|Billing metric for the number of messages sent out from Azure Digital Twins to external endpoints.|MeterId| |BillingQueryUnits|Yes|Billing Query Units|Count|Total|The number of Query Units, an internally computed measure of service resource usage, consumed to execute queries.|MeterId|
+|DataHistoryRouting|Yes|Data History Messages Routed (preview)|Count|Total|The number of messages routed to a time series database.|EndpointType, Result|
+|DataHistoryRoutingFailureRate|Yes|Data History Routing Failure Rate (preview)|Percent|Average|The percentage of events that result in an error as they are routed from Azure Digital Twins to a time series database.|EndpointType|
+|DataHistoryRoutingLatency|Yes|Data History Routing Latency (preview)|Milliseconds|Average|Time elapsed between an event getting routed from Azure Digital Twins to when it is posted to a time series database.|EndpointType, Result|
|IngressEvents|Yes|Ingress Events|Count|Total|The number of incoming telemetry events into Azure Digital Twins.|Result| |IngressEventsFailureRate|Yes|Ingress Events Failure Rate|Percent|Average|The percentage of incoming telemetry events for which the service returns an internal error (500) response code.|No Dimensions| |IngressEventsLatency|Yes|Ingress Events Latency|Milliseconds|Average|The time from when an event arrives to when it is ready to be egressed by Azure Digital Twins, at which point the service sends a success/fail result.|Result|
This latest update adds a new column and reorders the metrics to be alphabetical
|TwinCount|Yes|Twin Count|Count|Total|Total number of twins in the Azure Digital Twins instance. Use this metric to determine if you are approaching the service limit for max number of twins allowed per instance.|No Dimensions|
-## Microsoft.DocumentDB/databaseAccounts
+## Microsoft.DocumentDB/cassandraClusters
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|cassandra_cache_capacity|No|capacity|Bytes|Average|Cache capacity in bytes.|cassandra_datacenter, cassandra_node, cache_name|
+|cassandra_cache_entries|No|entries|Count|Average|Total number of cache entries.|cassandra_datacenter, cassandra_node, cache_name|
+|cassandra_cache_hit_rate|No|hit rate|Percent|Average|All time cache hit rate.|cassandra_datacenter, cassandra_node, cache_name|
+|cassandra_cache_hits|No|hits|Count|Total|Total number of cache hits.|cassandra_datacenter, cassandra_node, cache_name|
+|cassandra_cache_miss_latency_histogram|No|Cache miss latency histogram|Count|Total|Histogram of cache miss latency (in microseconds).|cassandra_datacenter, cassandra_node, quantile|
+|cassandra_cache_miss_latency_p99|No|miss latency p99 (in microseconds)|Count|Average|p99 Latency of misses.|cassandra_datacenter, cassandra_node, cache_name|
+|cassandra_cache_requests|No|requests|Count|Total|Total number of cache requests.|cassandra_datacenter, cassandra_node, cache_name|
+|cassandra_cache_size|No|size|Bytes|Average|Total size of occupied cache, in bytes.|cassandra_datacenter, cassandra_node, cache_name|
+|cassandra_client_auth_failure|No|auth failure|Count|Total|Number of failed client authentication requests.|cassandra_datacenter, cassandra_node|
+|cassandra_client_auth_success|No|auth success|Count|Total|Number of successful client authentication requests.|cassandra_datacenter, cassandra_node|
+|cassandra_client_request_condition_not_met|No|condition not met|Count|Total|Number of transaction preconditions did not match current values.|cassandra_datacenter, cassandra_node, request_type|
+|cassandra_client_request_contention_histogram|No|contention|Count|Total|How many contended reads/writes were encountered.|cassandra_datacenter, cassandra_node, request_type|
+|cassandra_client_request_contention_histogram_p99|No|contention histogram p99|Count|Average|p99 How many contended writes were encountered.|cassandra_datacenter, cassandra_node, request_type|
+|cassandra_client_request_failures|No|failures|Count|Total|Number of transaction failures encountered.|cassandra_datacenter, cassandra_node, request_type|
+|cassandra_client_request_latency_histogram|No|Client request latency histogram|Count|Total|Histogram of client request latency (in microseconds).|cassandra_datacenter, cassandra_node, quantile, request_type|
+|cassandra_client_request_latency_p99|No|latency p99 (in microseconds)|Count|Average|p99 Latency.|cassandra_datacenter, cassandra_node, request_type|
+|cassandra_client_request_timeouts|No|timeouts|Count|Total|Number of timeouts encountered.|cassandra_datacenter, cassandra_node, request_type|
+|cassandra_client_request_unfinished_commit|No|unfinished commit|Count|Total|Number of transactions that were committed on write.|cassandra_datacenter, cassandra_node, request_type|
+|cassandra_commit_log_waiting_on_commit_latency_histogram|No|waiting on commit latency histogram|Count|Total|Histogram of the time spent waiting on CL fsync (in microseconds); for Periodic this is only occurs when the sync is lagging its sync interval.|cassandra_datacenter, cassandra_node, quantile|
+|cassandra_cql_prepared_statements_executed|No|prepared statements executed|Count|Total|Number of prepared statements executed.|cassandra_datacenter, cassandra_node|
+|cassandra_cql_regular_statements_executed|No|regular statements executed|Count|Total|Number of non prepared statements executed.|cassandra_datacenter, cassandra_node|
+|cassandra_jvm_gc_count|No|gc count|Count|Average|Total number of collections that have occurred.|cassandra_datacenter, cassandra_node|
+|cassandra_jvm_gc_time|No|gc time|MilliSeconds|Average|Approximate accumulated collection elapsed time.|cassandra_datacenter, cassandra_node|
+|cassandra_table_all_memtables_live_data_size|No|all memtables live data size|Count|Average|Total amount of live data stored in the memtables (2i and pending flush memtables included) that resides off-heap, excluding any data structure overhead.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_all_memtables_off_heap_size|No|all memtables off heap size|Count|Average|Total amount of data stored in the memtables (2i and pending flush memtables included) that resides off-heap.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_bloom_filter_disk_space_used|No|bloom filter disk space used|Bytes|Average|Disk space used by bloom filter (in bytes).|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_bloom_filter_false_positives|No|bloom filter false positives|Count|Average|Number of false positives on table's bloom filter.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_bloom_filter_false_ratio|No|bloom filter false ratio|Percent|Average|False positive ratio of table's bloom filter.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_bloom_filter_off_heap_memory_used|No|bloom filter off-heap memory used|Count|Average|Off-heap memory used by bloom filter.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_bytes_flushed|No|bytes flushed|Bytes|Total|Total number of bytes flushed since server [re]start.|cassandra_datacenter, cassandra_node|
+|cassandra_table_cas_commit|No|cas commit (in microseconds)|Count|Total|Latency of paxos commit round.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_cas_commit_p99|No|cas commit p99 (in microseconds)|Count|Average|p99 Latency of paxos commit round.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_cas_prepare|No|cas prepare (in microseconds)|Count|Total|Latency of paxos prepare round.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_cas_prepare_p99|No|cas prepare p99 (in microseconds)|Count|Average|p99 Latency of paxos prepare round.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_cas_propose|No|cas propose (in microseconds)|Count|Total|Latency of paxos propose round.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_cas_propose_p99|No|cas propose p99 (in microseconds)|Count|Average|p99 Latency of paxos propose round.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_col_update_time_delta_histogram|No|col update time delta|Count|Total|Column update time delta on this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_col_update_time_delta_histogram_p99|No|col update time delta p99|Count|Average|p99 Column update time delta on this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_compaction_bytes_written|No|compaction bytes written|Bytes|Total|Total number of bytes written by compaction since server [re]start.|cassandra_datacenter, cassandra_node|
+|cassandra_table_compression_metadata_off_heap_memory_used|No|compression metadata off heap memory used|Count|Average|Off-heap memory used by compression meta data.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_compression_ratio|No|compression ratio|Percent|Average|Current compression ratio for all SSTables.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_coordinator_read_latency|No|coordinator read latency (in microseconds)|Count|Total|Coordinator read latency for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_coordinator_read_latency_p99|No|coordinator read latency p99 (in microseconds)|Count|Average|p99 Coordinator read latency for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_coordinator_scan_latency|No|coordinator scan latency (in microseconds)|Count|Total|Coordinator range scan latency for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_coordinator_scan_latency_p99|No|coordinator scan latency p99 (in microseconds)|Count|Average|p99 Coordinator range scan latency for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_dropped_mutations|No|dropped mutations|Count|Total|Number of dropped mutations on this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_estimated_column_count_histogram|No|estimated column count|Count|Total|Estimated number of columns.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_estimated_column_count_histogram_p99|No|estimated column count p99|Count|Average|p99 Estimated number of columns.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_estimated_partition_count|No|estimated partition count|Count|Average|Approximate number of keys in table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_estimated_partition_size_histogram|No|estimated partition size histogram|Bytes|Total|Histogram of estimated partition size.|cassandra_datacenter, cassandra_node, quantile|
+|cassandra_table_estimated_partition_size_histogram_p99|No|estimated partition size p99|Bytes|Average|p99 Estimated partition size (in bytes).|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_index_summary_off_heap_memory_used|No|index summary off heap memory used|Count|Average|Off-heap memory used by index summary.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_key_cache_hit_rate|No|key cache hit rate|Percent|Average|Key cache hit rate for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_live_disk_space_used|No|live disk space used|Bytes|Total|Disk space used by SSTables belonging to this table (in bytes).|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_live_scanned_histogram|No|live scanned|Count|Total|Live cells scanned in queries on this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_live_scanned_histogram_p99|No|live scanned p99|Count|Average|p99 Live cells scanned in queries on this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_live_sstable_count|No|live sstable count|Count|Average|Number of SSTables on disk for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_max_partition_size|No|max partition size|Bytes|Average|Size of the largest compacted partition (in bytes).|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_mean_partition_size|No|mean partition size|Bytes|Average|Size of the average compacted partition (in bytes).|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_memtable_columns_count|No|memtable columns count|Count|Average|Total number of columns present in the memtable.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_memtable_off_heap_size|No|memtable off heap size|Count|Average|Total amount of data stored in the memtable that resides off-heap, including column related overhead and partitions overwritten.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_memtable_on_heap_size|No|memtable on heap size|Count|Average|Total amount of data stored in the memtable that resides on-heap, including column related overhead and partitions overwritten.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_memtable_switch_count|No|memtable switch count|Count|Total|Number of times flush has resulted in the memtable being switched out.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_min_partition_size|No|min partition size|Bytes|Average|Size of the smallest compacted partition (in bytes).|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_pending_compactions|No|pending compactions|Count|Average|Estimate of number of pending compactions for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_pending_flushes|No|pending flushes|Count|Total|Estimated number of flush tasks pending for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_percent_repaired|No|percent repaired|Percent|Average|Percent of table data that is repaired on disk.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_range_latency|No|range latency (in microseconds)|Count|Total|Local range scan latency for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_range_latency_p99|No|range latency p99 (in microseconds)|Count|Average|p99 Local range scan latency for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_read_latency|No|read latency (in microseconds)|Count|Total|Local read latency for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_read_latency_p99|No|read latency p99 (in microseconds)|Count|Average|p99 Local read latency for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_row_cache_hit|No|row cache hit|Count|Total|Number of table row cache hits.|cassandra_datacenter, cassandra_node|
+|cassandra_table_row_cache_hit_out_of_range|No|row cache hit out of range|Count|Total|Number of table row cache hits that do not satisfy the query filter, thus went to disk.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_row_cache_miss|No|row cache miss|Count|Total|Number of table row cache misses.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_speculative_retries|No|speculative retries|Count|Total|Number of times speculative retries were sent for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_sstables_per_read_histogram|No|sstables per read|Count|Total|Number of sstable data files accessed per single partition read. SSTables skipped due to Bloom Filters, min-max key or partition index lookup are not taken into account.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_sstables_per_read_histogram_p99|No|sstables per read p99|Count|Average|p99 Number of sstable data files accessed per single partition read. SSTables skipped due to Bloom Filters, min-max key or partition index lookup are not taken into account.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_tombstone_scanned_histogram|No|tombstone scanned|Count|Total|Tombstones scanned in queries on this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_tombstone_scanned_histogram_p99|No|tombstone scanned p99|Count|Average|p99 Tombstones scanned in queries on this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_total_disk_space_used|No|total disk space used|Count|Total|Total disk space used by SSTables belonging to this table, including obsolete ones waiting to be GC'd.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_view_lock_acquire_time|No|view lock acquire time|Count|Total|Time taken acquiring a partition lock for materialized view updates on this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_view_lock_acquire_time_p99|No|view lock acquire time p99|Count|Average|p99 Time taken acquiring a partition lock for materialized view updates on this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_view_read_time|No|view read time|Count|Total|Time taken during the local read of a materialized view update.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_view_read_time_p99|No|view read time p99|Count|Average|p99 Time taken during the local read of a materialized view update.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_waiting_on_free_memtable_space|No|waiting on free memtable space|Count|Total|Time spent waiting for free memtable space, either on- or off-heap.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_waiting_on_free_memtable_space_p99|No|waiting on free memtable space p99|Count|Average|p99 Time spent waiting for free memtable space, either on- or off-heap.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_write_latency|No|write latency (in microseconds)|Count|Total|Local write latency for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_write_latency_p99|No|write latency p99 (in microseconds)|Count|Average|p99 Local write latency for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_thread_pools_active_tasks|No|active tasks|Count|Average|Number of tasks being actively worked on by this pool.|cassandra_datacenter, cassandra_node, pool_name, pool_type|
+|cassandra_thread_pools_currently_blocked_tasks|No|currently blocked tasks|Count|Total|Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.|cassandra_datacenter, cassandra_node, pool_name, pool_type|
+|cassandra_thread_pools_max_pool_size|No|max pool size|Count|Average|The maximum number of threads in this pool.|cassandra_datacenter, cassandra_node, pool_name, pool_type|
+|cassandra_thread_pools_pending_tasks|No|pending tasks|Count|Average|Number of tasks queued up on this pool.|cassandra_datacenter, cassandra_node, pool_name, pool_type|
+|cassandra_thread_pools_total_blocked_tasks|No|total blocked tasks|Count|Total|Number of tasks that were blocked due to queue saturation.|cassandra_datacenter, cassandra_node, pool_name, pool_type|
++
+## Microsoft.DocumentDB/DatabaseAccounts
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |AddRegion|Yes|Region Added|Count|Count|Region Added|Region| |AutoscaleMaxThroughput|No|Autoscale Max Throughput|Count|Maximum|Autoscale Max Throughput|DatabaseName, CollectionName|
-|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage" will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more info about Cosmos DB storage quota, please check this doc [https://docs.microsoft.com/azure/cosmos-db/concepts-limits](../../cosmos-db/concepts-limits.md). After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date.|CollectionName, DatabaseName, Region|
+|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage" will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more info about Cosmos DB storage quota, please check this doc https://docs.microsoft.com/azure/cosmos-db/concepts-limits. After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date.|CollectionName, DatabaseName, Region|
|CassandraConnectionClosures|No|Cassandra Connection Closures|Count|Total|Number of Cassandra connections that were closed, reported at a 1 minute granularity|APIType, Region, ClosureReason| |CassandraConnectorAvgReplicationLatency|No|Cassandra Connector Average ReplicationLatency|MilliSeconds|Average|Cassandra Connector Average ReplicationLatency|No Dimensions| |CassandraConnectorReplicationHealthStatus|No|Cassandra Connector Replication Health Status|Count|Count|Cassandra Connector Replication Health Status|NotStarted, ReplicationInProgress, Error|
This latest update adds a new column and reorders the metrics to be alphabetical
|DedicatedGatewayAverageCPUUsage|No|DedicatedGatewayAverageCPUUsage|Percent|Average|Average CPU usage across dedicated gateway instances|Region, MetricType| |DedicatedGatewayAverageMemoryUsage|No|DedicatedGatewayAverageMemoryUsage|Bytes|Average|Average memory usage across dedicated gateway instances, which is used for both routing requests and caching data|Region| |DedicatedGatewayMaximumCPUUsage|No|DedicatedGatewayMaximumCPUUsage|Percent|Average|Average Maximum CPU usage across dedicated gateway instances|Region, MetricType|
-|DedicatedGatewayRequests|Yes|DedicatedGatewayRequests|Count|Count|Requests at the dedicated gateway|DatabaseName, CollectionName, CacheExercised, OperationName, Region|
+|DedicatedGatewayRequests|Yes|DedicatedGatewayRequests|Count|Count|Requests at the dedicated gateway|DatabaseName, CollectionName, CacheExercised, OperationName, Region, CacheHit|
|DeleteAccount|Yes|Account Deleted|Count|Count|Account Deleted|No Dimensions| |DocumentCount|No|Document Count|Count|Total|Total document count reported at 5 minutes, 1 hour and 1 day granularity|CollectionName, DatabaseName, Region| |DocumentQuota|No|Document Quota|Bytes|Total|Total storage quota reported at 5 minutes granularity|CollectionName, DatabaseName, Region|
This latest update adds a new column and reorders the metrics to be alphabetical
|GremlinGraphDelete|No|Gremlin Graph Deleted|Count|Count|Gremlin Graph Deleted|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, OperationType| |GremlinGraphThroughputUpdate|No|Gremlin Graph Throughput Updated|Count|Count|Gremlin Graph Throughput Updated|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest| |GremlinGraphUpdate|No|Gremlin Graph Updated|Count|Count|Gremlin Graph Updated|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
+|GremlinRequestCharges|No|Gremlin Request Charges|Count|Total|Request Units consumed for Gremlin requests made|APIType, DatabaseName, CollectionName, Region|
+|GremlinRequests|No|Gremlin Requests|Count|Count|Number of Gremlin requests made|APIType, DatabaseName, CollectionName, Region, ErrorCode|
|IndexUsage|No|Index Usage|Bytes|Total|Total index usage reported at 5 minutes granularity|CollectionName, DatabaseName, Region| |IntegratedCacheEvictedEntriesSize|No|IntegratedCacheEvictedEntriesSize|Bytes|Average|Size of the entries evicted from the integrated cache|Region| |IntegratedCacheItemExpirationCount|No|IntegratedCacheItemExpirationCount|Count|Average|Number of items evicted from the integrated cache due to TTL expiration|Region, CacheEntryType|
This latest update adds a new column and reorders the metrics to be alphabetical
|MongoDBDatabaseUpdate|No|Mongo Database Updated|Count|Count|Mongo Database Updated|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType| |MongoRequestCharge|Yes|Mongo Request Charge|Count|Total|Mongo Request Units Consumed|DatabaseName, CollectionName, Region, CommandName, ErrorCode, Status| |MongoRequests|Yes|Mongo Requests|Count|Count|Number of Mongo Requests Made|DatabaseName, CollectionName, Region, CommandName, ErrorCode, Status|
-|MongoRequestsCount|No|(deprecated) Mongo Request Rate|CountPerSecond|Average|Mongo request Count per second|DatabaseName, CollectionName, Region, ErrorCode|
-|MongoRequestsDelete|No|(deprecated) Mongo Delete Request Rate|CountPerSecond|Average|Mongo Delete request per second|DatabaseName, CollectionName, Region, ErrorCode|
-|MongoRequestsInsert|No|(deprecated) Mongo Insert Request Rate|CountPerSecond|Average|Mongo Insert count per second|DatabaseName, CollectionName, Region, ErrorCode|
-|MongoRequestsQuery|No|(deprecated) Mongo Query Request Rate|CountPerSecond|Average|Mongo Query request per second|DatabaseName, CollectionName, Region, ErrorCode|
-|MongoRequestsUpdate|No|(deprecated) Mongo Update Request Rate|CountPerSecond|Average|Mongo Update request per second|DatabaseName, CollectionName, Region, ErrorCode|
-|NormalizedRUConsumption|No|Normalized RU Consumption|Percent|Maximum|Max RU consumption percentage per minute|CollectionName, DatabaseName, Region, PartitionKeyRangeId|
+|NormalizedRUConsumption|No|Normalized RU Consumption|Percent|Maximum|Max RU consumption percentage per minute|CollectionName, DatabaseName, Region, PartitionKeyRangeId, CollectionRid|
|ProvisionedThroughput|No|Provisioned Throughput|Count|Maximum|Provisioned Throughput|DatabaseName, CollectionName| |RegionFailover|Yes|Region Failed Over|Count|Count|Region Failed Over|No Dimensions| |RemoveRegion|Yes|Region Removed|Count|Count|Region Removed|Region| |ReplicationLatency|Yes|P99 Replication Latency|MilliSeconds|Average|P99 Replication Latency across source and target regions for geo-enabled account|SourceRegion, TargetRegion| |ServerSideLatency|No|Server Side Latency|MilliSeconds|Average|Server Side Latency|DatabaseName, CollectionName, Region, ConnectionMode, OperationType, PublicAPIType|
+|ServerSideLatencyDirect|No|Server Side Latency Direct|MilliSeconds|Average|Server Side Latency in Direct Connection Mode|DatabaseName, CollectionName, Region, ConnectionMode, OperationType, PublicAPIType, APIType|
+|ServerSideLatencyGateway|No|Server Side Latency Gateway|MilliSeconds|Average|Server Side Latency in Gateway Connection Mode|DatabaseName, CollectionName, Region, ConnectionMode, OperationType, PublicAPIType, APIType|
|ServiceAvailability|No|Service Availability|Percent|Average|Account requests availability at one hour, day or month granularity|No Dimensions| |SqlContainerCreate|No|Sql Container Created|Count|Count|Sql Container Created|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType| |SqlContainerDelete|No|Sql Container Deleted|Count|Count|Sql Container Deleted|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, OperationType|
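The dimensions listed in these rows (for example, `CollectionName`, `DatabaseName`, and `Region` on `NormalizedRUConsumption`) are what you pass when querying the metrics programmatically. A minimal sketch, assuming the `azure-monitor-query` and `azure-identity` Python packages and a placeholder Cosmos DB account resource ID:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

# Placeholder resource ID for a Cosmos DB account (replace with a real one).
RESOURCE_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# Query NormalizedRUConsumption over the last hour at 1-minute grain,
# using the Maximum aggregation documented for this metric.
response = client.query_resource(
    RESOURCE_ID,
    metric_names=["NormalizedRUConsumption"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=1),
    aggregations=[MetricAggregationType.MAXIMUM],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.maximum)
```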
This latest update adds a new column and reorders the metrics to be alphabetical
|DeadLetteredCount|Yes|Dead Lettered Events|Count|Total|Total dead lettered events matching to this event subscription|Topic, EventSubscriptionName, DomainEventSubscriptionName, DeadLetterReason| |DeliveryAttemptFailCount|No|Delivery Failed Events|Count|Total|Total events failed to deliver to this event subscription|Topic, EventSubscriptionName, DomainEventSubscriptionName, Error, ErrorType| |DeliverySuccessCount|Yes|Delivered Events|Count|Total|Total events delivered to this event subscription|Topic, EventSubscriptionName, DomainEventSubscriptionName|
-|DestinationProcessingDurationInMs|No|Destination Processing Duration|Milliseconds|Average|Destination processing duration in milliseconds|Topic, EventSubscriptionName, DomainEventSubscriptionName|
+|DestinationProcessingDurationInMs|No|Destination Processing Duration|MilliSeconds|Average|Destination processing duration in milliseconds|Topic, EventSubscriptionName, DomainEventSubscriptionName|
|DroppedEventCount|Yes|Dropped Events|Count|Total|Total dropped events matching to this event subscription|Topic, EventSubscriptionName, DomainEventSubscriptionName, DropReason| |MatchedEventCount|Yes|Matched Events|Count|Total|Total events matched to this event subscription|Topic, EventSubscriptionName, DomainEventSubscriptionName| |PublishFailCount|Yes|Publish Failed Events|Count|Total|Total events failed to publish to this topic|Topic, ErrorType, Error| |PublishSuccessCount|Yes|Published Events|Count|Total|Total events published to this topic|Topic|
-|PublishSuccessLatencyInMs|Yes|Publish Success Latency|Milliseconds|Total|Publish success latency in milliseconds|No Dimensions|
+|PublishSuccessLatencyInMs|Yes|Publish Success Latency|MilliSeconds|Total|Publish success latency in milliseconds|No Dimensions|
## Microsoft.EventGrid/eventSubscriptions
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|DeadLetteredCount|Yes|Dead Lettered Events|Count|Total|Total dead lettered events matching to this event subscription|DeadLetterReason, EventSubscriptionName|
-|DeliveryAttemptFailCount|No|Delivery Failed Events|Count|Total|Total events failed to deliver to this event subscription|Error, ErrorType, EventSubscriptionName|
-|DeliverySuccessCount|Yes|Delivered Events|Count|Total|Total events delivered to this event subscription|EventSubscriptionName|
-|DestinationProcessingDurationInMs|No|Destination Processing Duration|Milliseconds|Average|Destination processing duration in milliseconds|EventSubscriptionName|
-|DroppedEventCount|Yes|Dropped Events|Count|Total|Total dropped events matching to this event subscription|DropReason, EventSubscriptionName|
-|MatchedEventCount|Yes|Matched Events|Count|Total|Total events matched to this event subscription|EventSubscriptionName|
-|PublishFailCount|Yes|Publish Failed Events|Count|Total|Total events failed to publish to this topic|ErrorType, Error|
-|PublishSuccessCount|Yes|Published Events|Count|Total|Total events published to this topic|No Dimensions|
-|PublishSuccessLatencyInMs|Yes|Publish Success Latency|Milliseconds|Total|Publish success latency in milliseconds|No Dimensions|
-|UnmatchedEventCount|Yes|Unmatched Events|Count|Total|Total events not matching any of the event subscriptions for this topic|No Dimensions|
+|PublishFailCount|Yes|Publish Failed Events|Count|Total|Total events failed to publish to this partner namespace|ErrorType, Error|
+|PublishSuccessCount|Yes|Published Events|Count|Total|Total events published to this partner namespace|No Dimensions|
+|PublishSuccessLatencyInMs|Yes|Publish Success Latency|MilliSeconds|Total|Publish success latency in milliseconds|No Dimensions|
+|UnmatchedEventCount|Yes|Unmatched Events|Count|Total|Total events not matching any of the partner topics|No Dimensions|
## Microsoft.EventGrid/partnerTopics
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|AdvancedFilterEvaluationCount|Yes|Advanced Filter Evaluations|Count|Total|Total advanced filters evaluated across event subscriptions for this topic.|EventSubscriptionName|
+|AdvancedFilterEvaluationCount|Yes|Advanced Filter Evaluations|Count|Total|Total advanced filters evaluated across event subscriptions for this partner topic.|EventSubscriptionName|
|DeadLetteredCount|Yes|Dead Lettered Events|Count|Total|Total dead lettered events matching to this event subscription|DeadLetterReason, EventSubscriptionName| |DeliveryAttemptFailCount|No|Delivery Failed Events|Count|Total|Total events failed to deliver to this event subscription|Error, ErrorType, EventSubscriptionName| |DeliverySuccessCount|Yes|Delivered Events|Count|Total|Total events delivered to this event subscription|EventSubscriptionName|
-|DestinationProcessingDurationInMs|No|Destination Processing Duration|Milliseconds|Average|Destination processing duration in milliseconds|EventSubscriptionName|
+|DestinationProcessingDurationInMs|No|Destination Processing Duration|MilliSeconds|Average|Destination processing duration in milliseconds|EventSubscriptionName|
|DroppedEventCount|Yes|Dropped Events|Count|Total|Total dropped events matching to this event subscription|DropReason, EventSubscriptionName| |MatchedEventCount|Yes|Matched Events|Count|Total|Total events matched to this event subscription|EventSubscriptionName|
-|PublishFailCount|Yes|Publish Failed Events|Count|Total|Total events failed to publish to this topic|ErrorType, Error|
-|PublishSuccessCount|Yes|Published Events|Count|Total|Total events published to this topic|No Dimensions|
-|UnmatchedEventCount|Yes|Unmatched Events|Count|Total|Total events not matching any of the event subscriptions for this topic|No Dimensions|
+|PublishSuccessCount|Yes|Published Events|Count|Total|Total events published to this partner topic|No Dimensions|
+|UnmatchedEventCount|Yes|Unmatched Events|Count|Total|Total events not matching any of the event subscriptions for this partner topic|No Dimensions|
## Microsoft.EventGrid/systemTopics
This latest update adds a new column and reorders the metrics to be alphabetical
|DeadLetteredCount|Yes|Dead Lettered Events|Count|Total|Total dead lettered events matching to this event subscription|DeadLetterReason, EventSubscriptionName| |DeliveryAttemptFailCount|No|Delivery Failed Events|Count|Total|Total events failed to deliver to this event subscription|Error, ErrorType, EventSubscriptionName| |DeliverySuccessCount|Yes|Delivered Events|Count|Total|Total events delivered to this event subscription|EventSubscriptionName|
-|DestinationProcessingDurationInMs|No|Destination Processing Duration|Milliseconds|Average|Destination processing duration in milliseconds|EventSubscriptionName|
+|DestinationProcessingDurationInMs|No|Destination Processing Duration|MilliSeconds|Average|Destination processing duration in milliseconds|EventSubscriptionName|
|DroppedEventCount|Yes|Dropped Events|Count|Total|Total dropped events matching to this event subscription|DropReason, EventSubscriptionName| |MatchedEventCount|Yes|Matched Events|Count|Total|Total events matched to this event subscription|EventSubscriptionName| |PublishFailCount|Yes|Publish Failed Events|Count|Total|Total events failed to publish to this topic|ErrorType, Error| |PublishSuccessCount|Yes|Published Events|Count|Total|Total events published to this topic|No Dimensions|
-|PublishSuccessLatencyInMs|Yes|Publish Success Latency|Milliseconds|Total|Publish success latency in milliseconds|No Dimensions|
+|PublishSuccessLatencyInMs|Yes|Publish Success Latency|MilliSeconds|Total|Publish success latency in milliseconds|No Dimensions|
|UnmatchedEventCount|Yes|Unmatched Events|Count|Total|Total events not matching any of the event subscriptions for this topic|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.EventHub.|No Dimensions|
-## Microsoft.EventHub/namespaces
+## Microsoft.EventHub/Namespaces
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
This latest update adds a new column and reorders the metrics to be alphabetical
|INREQS|Yes|Incoming Requests (Deprecated)|Count|Total|Total incoming send requests for a namespace (Deprecated)|No Dimensions| |INTERR|Yes|Internal Server Errors (Deprecated)|Count|Total|Total internal server errors for a namespace (Deprecated)|No Dimensions| |MISCERR|Yes|Other Errors (Deprecated)|Count|Total|Total failed requests for a namespace (Deprecated)|No Dimensions|
-|NamespaceCpuUsage|No|CPU|Percent|Maximum|CPU usage metric for Premium SKU namespaces.|No Dimensions|
-|NamespaceMemoryUsage|No|Memory Usage|Percent|Maximum|Memory usage metric for Premium SKU namespaces.|No Dimensions|
+|NamespaceCpuUsage|No|CPU|Percent|Maximum|CPU usage metric for Premium SKU namespaces.|Replica|
+|NamespaceMemoryUsage|No|Memory Usage|Percent|Maximum|Memory usage metric for Premium SKU namespaces.|Replica|
|OutgoingBytes|Yes|Outgoing Bytes.|Bytes|Total|Outgoing Bytes for Microsoft.EventHub.|EntityName| |OutgoingMessages|Yes|Outgoing Messages|Count|Total|Outgoing Messages for Microsoft.EventHub.|EntityName| |OUTMSGS|Yes|Outgoing Messages (obsolete) (Deprecated)|Count|Total|Total outgoing messages for a namespace. This metric is deprecated. Please use Outgoing Messages metric instead (Deprecated)|No Dimensions|
-|QuotaExceededErrors|No|Quota Exceeded Errors.|Count|Total|Quota Exceeded Errors for Microsoft.EventHub.|EntityName, |
-|ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.EventHub.|EntityName, |
+|QuotaExceededErrors|No|Quota Exceeded Errors.|Count|Total|Quota Exceeded Errors for Microsoft.EventHub.|EntityName, OperationResult|
+|ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.EventHub.|EntityName, OperationResult|
|Size|No|Size|Bytes|Average|Size of an EventHub in Bytes.|EntityName|
-|SuccessfulRequests|No|Successful Requests|Count|Total|Successful Requests for Microsoft.EventHub.|EntityName, |
+|SuccessfulRequests|No|Successful Requests|Count|Total|Successful Requests for Microsoft.EventHub.|EntityName, OperationResult|
|SUCCREQ|Yes|Successful Requests (Deprecated)|Count|Total|Total successful requests for a namespace (Deprecated)|No Dimensions| |SVRBSY|Yes|Server Busy Errors (Deprecated)|Count|Total|Total server busy errors for a namespace (Deprecated)|No Dimensions|
-|ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.EventHub.|EntityName, |
-|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.EventHub.|EntityName, |
+|ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.EventHub.|EntityName, OperationResult|
+|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.EventHub.|EntityName, OperationResult|
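This update adds an `OperationResult` dimension to several Event Hubs namespace metrics. A hedged sketch of splitting one of those metrics by the new dimension, again assuming the `azure-monitor-query` Python package and a placeholder namespace resource ID; the `'*'` filter value asks for one time series per dimension value:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

# Placeholder resource ID for an Event Hubs namespace (replace with a real one).
RESOURCE_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.EventHub/namespaces/<namespace-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# Split ThrottledRequests by EntityName and the newly documented OperationResult.
response = client.query_resource(
    RESOURCE_ID,
    metric_names=["ThrottledRequests"],
    timespan=timedelta(hours=1),
    aggregations=[MetricAggregationType.TOTAL],
    filter="EntityName eq '*' and OperationResult eq '*'",
)

for metric in response.metrics:
    for series in metric.timeseries:
        dims = series.metadata_values or {}
        total = sum(point.total or 0 for point in series.data)
        print(dims, total)
```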
## Microsoft.HDInsight/clusters
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalRequests|Yes|Total Requests|Count|Sum|The total number of requests received by the service.|Protocol|
+## Microsoft.HealthcareApis/workspaces/fhirservices
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|Availability|Yes|Availability|Percent|Average|The availability rate of the service.|No Dimensions|
+|TotalDataSize|Yes|Total Data Size|Bytes|Total|Total size of the data in the backing database, in bytes.|No Dimensions|
+|TotalErrors|Yes|Total Errors|Count|Sum|The total number of internal server errors encountered by the service.|Protocol, StatusCode, StatusCodeClass, StatusCodeText|
+|TotalLatency|Yes|Total Latency|Milliseconds|Average|The response latency of the service.|Protocol|
+|TotalRequests|Yes|Total Requests|Count|Sum|The total number of requests received by the service.|Protocol|
++
## Microsoft.HealthcareApis/workspaces/iotconnectors
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|DeviceEvent|Yes|Number of Incoming Messages|Count|Sum|The total number of messages received by the Azure IoT Connector for FHIR prior to any normalization.|Operation, ResourceName|
|DeviceEventProcessingLatencyMs|Yes|Average Normalize Stage Latency|Milliseconds|Average|The average time between an event's ingestion time and the time the event is processed for normalization.|Operation, ResourceName|
+|IotConnectorStatus|Yes|IotConnector Health Status|Percent|Average|Health checks which indicate the overall health of the IoT Connector.|Operation, ResourceName, HealthCheckName|
|Measurement|Yes|Number of Measurements|Count|Sum|The number of normalized value readings received by the FHIR conversion stage of the Azure IoT Connector for FHIR.|Operation, ResourceName|
|MeasurementGroup|Yes|Number of Message Groups|Count|Sum|The total number of unique groupings of measurements across type, device, patient, and configured time period generated by the FHIR conversion stage.|Operation, ResourceName|
|MeasurementIngestionLatencyMs|Yes|Average Group Stage Latency|Milliseconds|Average|The time period between when the IoT Connector received the device data and when the data is processed by the FHIR conversion stage.|Operation, ResourceName|
|NormalizedEvent|Yes|Number of Normalized Messages|Count|Sum|The total number of mapped normalized values outputted from the normalization stage of the Azure IoT Connector for FHIR.|Operation, ResourceName|
|TotalErrors|Yes|Total Error Count|Count|Sum|The total number of errors logged by the Azure IoT Connector for FHIR|Name, Operation, ErrorType, ErrorSeverity, ResourceName|
-
## microsoft.hybridnetwork/networkfunctions
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
||||||||
|HyperVVirtualProcessorUtilization|Yes|Average CPU Utilization|Percent|Average|Total average percentage of virtual CPU utilization at one minute interval. The total number of virtual CPUs is based on the user-configured value in the SKU definition. Further filtering can be applied based on RoleName defined in the SKU.|InstanceName|
-
## microsoft.insights/autoscalesettings
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|provisionedDeviceCount|No|Total Provisioned Devices|Count|Average|Number of devices provisioned in IoT Central application|No Dimensions|
-## Microsoft.KeyVault/managedHSMs
+## microsoft.keyvault/managedhsms
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|Availability|No|Overall Vault Availability|Percent|Average|Vault requests availability|ActivityType, ActivityName, StatusCode, StatusCodeClass|
+|Availability|No|Overall Service Availability|Percent|Average|Service requests availability|ActivityType, ActivityName, StatusCode, StatusCodeClass|
|ServiceApiHit|Yes|Total Service Api Hits|Count|Count|Number of total service api hits|ActivityType, ActivityName| |ServiceApiLatency|No|Overall Service Api Latency|Milliseconds|Average|Overall latency of service api requests|ActivityType, ActivityName, StatusCode, StatusCodeClass|
-|ServiceApiResult|Yes|Total Service Api Results|Count|Count|Gets the available metrics for a Managed HSM pool|ActivityType, ActivityName, StatusCode, StatusCodeClass|
## Microsoft.KeyVault/vaults
This latest update adds a new column and reorders the metrics to be alphabetical
|Availability|Yes|Overall Vault Availability|Percent|Average|Vault requests availability|ActivityType, ActivityName, StatusCode, StatusCodeClass| |SaturationShoebox|No|Overall Vault Saturation|Percent|Average|Vault capacity used|ActivityType, ActivityName, TransactionType| |ServiceApiHit|Yes|Total Service Api Hits|Count|Count|Number of total service api hits|ActivityType, ActivityName|
-|ServiceApiLatency|Yes|Overall Service Api Latency|Milliseconds|Average|Overall latency of service api requests|ActivityType, ActivityName, StatusCode, StatusCodeClass|
+|ServiceApiLatency|Yes|Overall Service Api Latency|MilliSeconds|Average|Overall latency of service api requests|ActivityType, ActivityName, StatusCode, StatusCodeClass|
|ServiceApiResult|Yes|Total Service Api Results|Count|Count|Number of total service api results|ActivityType, ActivityName, StatusCode, StatusCodeClass|
-
## microsoft.kubernetes/connectedClusters
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|capacity_cpu_cores|Yes|Total number of cpu cores in a connected cluster|Count|Total|Total number of cpu cores in a connected cluster|No Dimensions|
-
## Microsoft.Kusto/Clusters
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|TriggerThrottledEvents|Yes|Trigger Throttled Events|Count|Total|Number of workflow trigger throttled events.|No Dimensions|
-## Microsoft.Logic/workflows
+## Microsoft.Logic/Workflows
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
This latest update adds a new column and reorders the metrics to be alphabetical
|CpuUtilization|Yes|CpuUtilization|Count|Average|Percentage of utilization on a CPU node. Utilization is reported at one minute intervals.|Scenario, runId, NodeId, ClusterName| |CpuUtilizationMillicores|Yes|CpuUtilizationMillicores|Count|Average|Utilization of a CPU node in millicores. Utilization is aggregated in one minute intervals.|RunId, InstanceId, ComputeName| |CpuUtilizationPercentage|Yes|CpuUtilizationPercentage|Count|Average|Utilization percentage of a CPU node. Utilization is aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
+|DiskAvailMegabytes|Yes|DiskAvailMegabytes|Count|Average|Available disk space in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
+|DiskReadMegabytes|Yes|DiskReadMegabytes|Count|Average|Data read from disk in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
+|DiskUsedMegabytes|Yes|DiskUsedMegabytes|Count|Average|Used disk space in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
+|DiskWriteMegabytes|Yes|DiskWriteMegabytes|Count|Average|Data written into disk in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
|Errors|Yes|Errors|Count|Total|Number of run errors in this workspace. Count is updated whenever run encounters an error.|Scenario| |Failed Runs|Yes|Failed Runs|Count|Total|Number of runs failed for this workspace. Count is updated when a run fails.|Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName| |Finalizing Runs|Yes|Finalizing Runs|Count|Total|Number of runs entered finalizing state for this workspace. Count is updated when a run has completed but output collection still in progress.|Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName|
This latest update adds a new column and reorders the metrics to be alphabetical
|ContentKeyPolicyCount|Yes|Content Key Policy count|Count|Average|How many content key policies are already created in current media service account|No Dimensions| |ContentKeyPolicyQuota|Yes|Content Key Policy quota|Count|Average|How many content key policies are allowed for current media service account|No Dimensions| |ContentKeyPolicyQuotaUsedPercentage|Yes|Content Key Policy quota used percentage|Percent|Average|Content Key Policy used percentage in current media service account|No Dimensions|
+|JobQuota|Yes|Job quota|Count|Average|The Job quota for the current media service account.|No Dimensions|
+|JobsScheduled|Yes|Jobs Scheduled|Count|Average|The number of Jobs in the Scheduled state. Counts on this metric only reflect jobs submitted through the v3 API. Jobs submitted through the v2 (Legacy) API are not counted.|No Dimensions|
|MaxChannelsAndLiveEventsCount|Yes|Max live event quota|Count|Average|The maximum number of live events allowed in the current media services account|No Dimensions| |MaxRunningChannelsAndLiveEventsCount|Yes|Max running live event quota|Count|Average|The maximum number of running live events allowed in the current media services account|No Dimensions| |RunningChannelsAndLiveEventsCount|Yes|Running live event count|Count|Average|The total number of running live events in the current media services account|No Dimensions| |StreamingPolicyCount|Yes|Streaming Policy count|Count|Average|How many streaming policies are already created in current media service account|No Dimensions| |StreamingPolicyQuota|Yes|Streaming Policy quota|Count|Average|How many streaming policies are allowed for current media service account|No Dimensions| |StreamingPolicyQuotaUsedPercentage|Yes|Streaming Policy quota used percentage|Percent|Average|Streaming Policy used percentage in current media service account|No Dimensions|
+|TransformQuota|Yes|Transform quota|Count|Average|The Transform quota for the current media service account.|No Dimensions|
## Microsoft.Media/mediaservices/liveEvents
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|IngressBytes|Yes|Ingress Bytes|Bytes|Total|The number of bytes ingressed by the pipeline node.|PipelineTopology, Pipeline, Node|
+|IngressBytes|Yes|Ingress Bytes|Bytes|Total|The number of bytes ingressed by the pipeline node.|PipelineKind, PipelineTopology, Pipeline, Node|
+|Pipelines|Yes|Pipelines|Count|Total|The number of pipelines of each kind and state|PipelineKind, PipelineTopology, PipelineState|
## Microsoft.MixedReality/remoteRenderingAccounts
This latest update adds a new column and reorders the metrics to be alphabetical
|XregionReplicationTotalTransferBytes|Yes|Volume replication total transfer|Bytes|Average|Cumulative bytes transferred for the relationship.|No Dimensions|
-## Microsoft.Network/applicationGateways
+## Microsoft.Network/applicationgateways
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
This latest update adds a new column and reorders the metrics to be alphabetical
|BackendFirstByteResponseTime|No|Backend First Byte Response Time|MilliSeconds|Average|Time interval between start of establishing a connection to backend server and receiving the first byte of the response header, approximating processing time of backend server|Listener, BackendServer, BackendPool, BackendHttpSetting| |BackendLastByteResponseTime|No|Backend Last Byte Response Time|MilliSeconds|Average|Time interval between start of establishing a connection to backend server and receiving the last byte of the response body|Listener, BackendServer, BackendPool, BackendHttpSetting| |BackendResponseStatus|Yes|Backend Response Status|Count|Total|The number of HTTP response codes generated by the backend members. This does not include any response codes generated by the Application Gateway.|BackendServer, BackendPool, BackendHttpSetting, HttpStatusGroup|
+|BackendTlsNegotiationError|Yes|Backend TLS Connection Errors|Count|Total|TLS Connection Errors for Application Gateway Backend|BackendHttpSetting, BackendPool, ErrorType|
|BlockedCount|Yes|Web Application Firewall Blocked Requests Rule Distribution|Count|Total|Web Application Firewall blocked requests rule distribution|RuleGroup, RuleId| |BlockedReqCount|Yes|Web Application Firewall Blocked Requests Count|Count|Total|Web Application Firewall blocked requests count|No Dimensions| |BytesReceived|Yes|Bytes Received|Bytes|Total|The total number of bytes received by the Application Gateway from the clients|Listener|
This latest update adds a new column and reorders the metrics to be alphabetical
|HealthyHostCount|Yes|Healthy Host Count|Count|Average|Number of healthy backend hosts|BackendSettingsPool| |MatchedCount|Yes|Web Application Firewall Total Rule Distribution|Count|Total|Web Application Firewall Total Rule Distribution for the incoming traffic|RuleGroup, RuleId| |NewConnectionsPerSecond|No|New connections per second|CountPerSecond|Average|New connections per second established with Application Gateway|No Dimensions|
+|RejectedConnections|Yes|Rejected Connections|Count|Total|Count of rejected connections for Application Gateway Frontend|No Dimensions|
|ResponseStatus|Yes|Response Status|Count|Total|Http response status returned by Application Gateway|HttpStatusGroup| |Throughput|No|Throughput|BytesPerSecond|Average|Number of bytes per second the Application Gateway has served|No Dimensions| |TlsProtocol|Yes|Client TLS Protocol|Count|Total|The number of TLS and non-TLS requests initiated by the client that established connection with the Application Gateway. To view TLS protocol distribution, filter by the dimension TLS Protocol.|Listener, TlsProtocol|
This latest update adds a new column and reorders the metrics to be alphabetical
|UnhealthyHostCount|Yes|Unhealthy Host Count|Count|Average|Number of unhealthy backend hosts|BackendSettingsPool|
-## Microsoft.Network/azurefirewalls
+## Microsoft.Network/azureFirewalls
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|AllocatedSnatPorts|No|Allocated SNAT Ports|Count|Average|Total number of SNAT ports allocated within time period|FrontendIPAddress, BackendIPAddress, ProtocolType, |
+|AllocatedSnatPorts|No|Allocated SNAT Ports|Count|Average|Total number of SNAT ports allocated within time period|FrontendIPAddress, BackendIPAddress, ProtocolType, IsAwaitingRemoval|
|ByteCount|Yes|Byte Count|Bytes|Total|Total number of Bytes transmitted within time period|FrontendIPAddress, FrontendPort, Direction| |DipAvailability|Yes|Health Probe Status|Count|Average|Average Load Balancer health probe status per time duration|ProtocolType, BackendPort, FrontendIPAddress, FrontendPort, BackendIPAddress| |PacketCount|Yes|Packet Count|Count|Total|Total number of Packets transmitted within time period|FrontendIPAddress, FrontendPort, Direction| |SnatConnectionCount|Yes|SNAT Connection Count|Count|Total|Total number of new SNAT connections created within time period|FrontendIPAddress, BackendIPAddress, ConnectionState| |SYNCount|Yes|SYN Count|Count|Total|Total number of SYN Packets transmitted within time period|FrontendIPAddress, FrontendPort, Direction|
-|UsedSnatPorts|No|Used SNAT Ports|Count|Average|Total number of SNAT ports used within time period|FrontendIPAddress, BackendIPAddress, ProtocolType, |
+|UsedSnatPorts|No|Used SNAT Ports|Count|Average|Total number of SNAT ports used within time period|FrontendIPAddress, BackendIPAddress, ProtocolType, IsAwaitingRemoval|
|VipAvailability|Yes|Data Path Availability|Count|Average|Average Load Balancer data path availability per time duration|FrontendIPAddress, FrontendPort|
This latest update adds a new column and reorders the metrics to be alphabetical
|TestResult|Yes|Test Result|Count|Average|Connection monitor test result|SourceAddress, SourceName, SourceResourceId, SourceType, Protocol, DestinationAddress, DestinationName, DestinationResourceId, DestinationType, DestinationPort, TestGroupName, TestConfigurationName, TestResultCriterion, SourceIP, DestinationIP, SourceSubnet, DestinationSubnet|
-## Microsoft.Network/p2sVpnGateways
+## microsoft.network/p2svpngateways
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|P2SBandwidth|Yes|Gateway P2S Bandwidth|BytesPerSecond|Average|Average point-to-site bandwidth of a gateway in bytes per second|Instance|
+|P2SBandwidth|Yes|Gateway P2S Bandwidth|BytesPerSecond|Average|Point-to-site bandwidth of a gateway in bytes per second|Instance|
|P2SConnectionCount|Yes|P2S Connection Count|Count|Total|Point-to-site connection count of a gateway|Protocol, Instance|
+|UserVpnRouteCount|No|User Vpn Route Count|Count|Total|Count of P2S User Vpn routes learned by gateway|RouteType, Instance|
## Microsoft.Network/privateDnsZones
This latest update adds a new column and reorders the metrics to be alphabetical
|ProbeAgentCurrentEndpointStateByProfileResourceId|Yes|Endpoint Status by Endpoint|Count|Maximum|1 if an endpoint's probe status is "Enabled", 0 otherwise.|EndpointName|
|QpsByEndpoint|Yes|Queries by Endpoint Returned|Count|Total|Number of times a Traffic Manager endpoint was returned in the given time frame|EndpointName|
-
## Microsoft.Network/virtualHubs
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|CountOfRoutesAdvertisedToPeer|No|Count Of Routes Advertised To Peer|Count|Maximum|Total number of routes advertised to peer|routeserviceinstance, bgppeerip, bgppeertype|
|CountOfRoutesLearnedFromPeer|No|Count Of Routes Learned From Peer|Count|Maximum|Total number of routes learned from peer|routeserviceinstance, bgppeerip, bgppeertype|
-
-## Microsoft.Network/virtualNetworkGateways
+## microsoft.network/virtualnetworkgateways
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|AverageBandwidth|Yes|Gateway S2S Bandwidth|BytesPerSecond|Average|Average site-to-site bandwidth of a gateway in bytes per second|Instance|
-|ExpressRouteGatewayCountOfRoutesAdvertisedToPeer|Yes|Count Of Routes Advertised to Peer|Count|Maximum|Count Of Routes Advertised To Peer by ExpressRouteGateway|roleInstance|
-|ExpressRouteGatewayCountOfRoutesLearnedFromPeer|Yes|Count Of Routes Learned from Peer|Count|Maximum|Count Of Routes Learned From Peer by ExpressRouteGateway|roleInstance|
+|AverageBandwidth|Yes|Gateway S2S Bandwidth|BytesPerSecond|Average|Site-to-site bandwidth of a gateway in bytes per second|Instance|
+|BgpPeerStatus|No|BGP Peer Status|Count|Average|Status of BGP peer|BgpPeerAddress, Instance|
+|BgpRoutesAdvertised|Yes|BGP Routes Advertised|Count|Total|Count of Bgp Routes Advertised through tunnel|BgpPeerAddress, Instance|
+|BgpRoutesLearned|Yes|BGP Routes Learned|Count|Total|Count of Bgp Routes Learned through tunnel|BgpPeerAddress, Instance|
+|ExpressRouteGatewayCountOfRoutesAdvertisedToPeer|Yes|Count Of Routes Advertised to Peer(Preview)|Count|Maximum|Count Of Routes Advertised To Peer by ExpressRouteGateway|roleInstance|
+|ExpressRouteGatewayCountOfRoutesLearnedFromPeer|Yes|Count Of Routes Learned from Peer (Preview)|Count|Maximum|Count Of Routes Learned From Peer by ExpressRouteGateway|roleInstance|
|ExpressRouteGatewayCpuUtilization|Yes|CPU utilization|Percent|Average|CPU Utilization of the ExpressRoute Gateway|roleInstance|
-|ExpressRouteGatewayFrequencyOfRoutesChanged|No|Frequency of Routes change|Count|Total|Frequency of Routes change in ExpressRoute Gateway|roleInstance|
-|ExpressRouteGatewayNumberOfVmInVnet|No|Number of VMs in the Virtual Network|Count|Maximum|Number of VMs in the Virtual Network|No Dimensions|
+|ExpressRouteGatewayFrequencyOfRoutesChanged|No|Frequency of Routes change (Preview)|Count|Total|Frequency of Routes change in ExpressRoute Gateway|roleInstance|
+|ExpressRouteGatewayNumberOfVmInVnet|No|Number of VMs in the Virtual Network (Preview)|Count|Maximum|Number of VMs in the Virtual Network|roleInstance|
|ExpressRouteGatewayPacketsPerSecond|No|Packets per second|CountPerSecond|Average|Packet count of ExpressRoute Gateway|roleInstance|
-|P2SBandwidth|Yes|Gateway P2S Bandwidth|BytesPerSecond|Average|Average point-to-site bandwidth of a gateway in bytes per second|Instance|
-|P2SConnectionCount|Yes|P2S Connection Count|Count|Maximum|Point-to-site connection count of a gateway|Protocol, Instance|
+|MmsaCount|Yes|Tunnel MMSA Count|Count|Total|MMSA Count|ConnectionName, RemoteIP, Instance|
+|P2SBandwidth|Yes|Gateway P2S Bandwidth|BytesPerSecond|Average|Point-to-site bandwidth of a gateway in bytes per second|Instance|
+|P2SConnectionCount|Yes|P2S Connection Count|Count|Total|Point-to-site connection count of a gateway|Protocol, Instance|
+|QmsaCount|Yes|Tunnel QMSA Count|Count|Total|QMSA Count|ConnectionName, RemoteIP, Instance|
|TunnelAverageBandwidth|Yes|Tunnel Bandwidth|BytesPerSecond|Average|Average bandwidth of a tunnel in bytes per second|ConnectionName, RemoteIP, Instance| |TunnelEgressBytes|Yes|Tunnel Egress Bytes|Bytes|Total|Outgoing bytes of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelEgressPacketDropCount|Yes|Tunnel Egress Packet Drop Count|Count|Total|Count of outgoing packets dropped by tunnel|ConnectionName, RemoteIP, Instance|
|TunnelEgressPacketDropTSMismatch|Yes|Tunnel Egress TS Mismatch Packet Drop|Count|Total|Outgoing packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelEgressPackets|Yes|Tunnel Egress Packets|Count|Total|Outgoing packet count of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelIngressBytes|Yes|Tunnel Ingress Bytes|Bytes|Total|Incoming bytes of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelIngressPacketDropCount|Yes|Tunnel Ingress Packet Drop Count|Count|Total|Count of incoming packets dropped by tunnel|ConnectionName, RemoteIP, Instance|
|TunnelIngressPacketDropTSMismatch|Yes|Tunnel Ingress TS Mismatch Packet Drop|Count|Total|Incoming packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelIngressPackets|Yes|Tunnel Ingress Packets|Count|Total|Incoming packet count of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelNatAllocations|No|Tunnel NAT Allocations|Count|Total|Count of allocations for a NAT rule on a tunnel|NatRule, ConnectionName, RemoteIP, Instance|
-|TunnelNatedBytes|No|Tunnel NATed Bytes|Bytes|Total|Number of bytes that were NATed on a tunnel by a NAT rule |NatRule, ConnectionName, RemoteIP, Instance|
+|TunnelNatedBytes|No|Tunnel NATed Bytes|Bytes|Total|Number of bytes that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
|TunnelNatedPackets|No|Tunnel NATed Packets|Count|Total|Number of packets that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
-|TunnelNatFlowCount|No|Tunnel NAT Flows|Count|Total|Number of NAT flows on a tunnel by flow type and NAT rule|NatRule, ConnectionName, RemoteIP, FlowType, Instance|
-|TunnelNatPacketDrop|No|Tunnel NAT Packet Drops|Count|Total|Number of NATed packets on a tunnel that dropped by drop type and NAT rule|NatRule, ConnectionName, RemoteIP, DropType, Instance|
+|TunnelNatFlowCount|No|Tunnel NAT Flows|Count|Total|Number of NAT flows on a tunnel by flow type and NAT rule|NatRule, FlowType, ConnectionName, RemoteIP, Instance|
+|TunnelNatPacketDrop|No|Tunnel NAT Packet Drops|Count|Total|Number of NATed packets on a tunnel that dropped by drop type and NAT rule|NatRule, DropType, ConnectionName, RemoteIP, Instance|
+|TunnelPeakPackets|Yes|Tunnel Peak PPS|Count|Maximum|Tunnel Peak Packets Per Second|ConnectionName, RemoteIP, Instance|
|TunnelReverseNatedBytes|No|Tunnel Reverse NATed Bytes|Bytes|Total|Number of bytes that were reverse NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance| |TunnelReverseNatedPackets|No|Tunnel Reverse NATed Packets|Count|Total|Number of packets on a tunnel that were reverse NATed by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
+|TunnelTotalFlowCount|Yes|Tunnel Total Flow Count|Count|Total|Total flow count on a tunnel|ConnectionName, RemoteIP, Instance|
+|UserVpnRouteCount|No|User Vpn Route Count|Count|Total|Count of P2S User Vpn routes learned by gateway|RouteType, Instance|
+|VnetAddressPrefixCount|Yes|VNet Address Prefix Count|Count|Total|Count of Vnet address prefixes behind gateway|Instance|
## Microsoft.Network/virtualNetworks
This latest update adds a new column and reorders the metrics to be alphabetical
|PeeringAvailability|Yes|Bgp Availability|Percent|Average|BGP Availability between VirtualRouter and remote peers|Peer|
-## Microsoft.Network/vpnGateways
+## microsoft.network/vpngateways
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|AverageBandwidth|Yes|Gateway S2S Bandwidth|BytesPerSecond|Average|Average site-to-site bandwidth of a gateway in bytes per second|Instance|
+|AverageBandwidth|Yes|Gateway S2S Bandwidth|BytesPerSecond|Average|Site-to-site bandwidth of a gateway in bytes per second|Instance|
+|BgpPeerStatus|No|BGP Peer Status|Count|Average|Status of BGP peer|BgpPeerAddress, Instance|
+|BgpRoutesAdvertised|Yes|BGP Routes Advertised|Count|Total|Count of Bgp Routes Advertised through tunnel|BgpPeerAddress, Instance|
+|BgpRoutesLearned|Yes|BGP Routes Learned|Count|Total|Count of Bgp Routes Learned through tunnel|BgpPeerAddress, Instance|
+|MmsaCount|Yes|Tunnel MMSA Count|Count|Total|MMSA Count|ConnectionName, RemoteIP, Instance|
+|QmsaCount|Yes|Tunnel QMSA Count|Count|Total|QMSA Count|ConnectionName, RemoteIP, Instance|
|TunnelAverageBandwidth|Yes|Tunnel Bandwidth|BytesPerSecond|Average|Average bandwidth of a tunnel in bytes per second|ConnectionName, RemoteIP, Instance| |TunnelEgressBytes|Yes|Tunnel Egress Bytes|Bytes|Total|Outgoing bytes of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelEgressPacketDropCount|Yes|Tunnel Egress Packet Drop Count|Count|Total|Count of outgoing packets dropped by tunnel|ConnectionName, RemoteIP, Instance|
|TunnelEgressPacketDropTSMismatch|Yes|Tunnel Egress TS Mismatch Packet Drop|Count|Total|Outgoing packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelEgressPackets|Yes|Tunnel Egress Packets|Count|Total|Outgoing packet count of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelIngressBytes|Yes|Tunnel Ingress Bytes|Bytes|Total|Incoming bytes of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelIngressPacketDropCount|Yes|Tunnel Ingress Packet Drop Count|Count|Total|Count of incoming packets dropped by tunnel|ConnectionName, RemoteIP, Instance|
|TunnelIngressPacketDropTSMismatch|Yes|Tunnel Ingress TS Mismatch Packet Drop|Count|Total|Incoming packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelIngressPackets|Yes|Tunnel Ingress Packets|Count|Total|Incoming packet count of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelNatAllocations|No|Tunnel NAT Allocations|Count|Total|Count of allocations for a NAT rule on a tunnel|NatRule, ConnectionName, RemoteIP, Instance|
-|TunnelNatedBytes|No|Tunnel NATed Bytes|Bytes|Total|Number of bytes that were NATed on a tunnel by a NAT rule |NatRule, ConnectionName, RemoteIP, Instance|
+|TunnelNatedBytes|No|Tunnel NATed Bytes|Bytes|Total|Number of bytes that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
|TunnelNatedPackets|No|Tunnel NATed Packets|Count|Total|Number of packets that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
-|TunnelNatFlowCount|No|Tunnel NAT Flows|Count|Total|Number of NAT flows on a tunnel by flow type and NAT rule|NatRule, ConnectionName, RemoteIP, FlowType, Instance|
-|TunnelNatPacketDrop|No|Tunnel NAT Packet Drops|Count|Total|Number of NATed packets on a tunnel that dropped by drop type and NAT rule|NatRule, ConnectionName, RemoteIP, DropType, Instance|
+|TunnelNatFlowCount|No|Tunnel NAT Flows|Count|Total|Number of NAT flows on a tunnel by flow type and NAT rule|NatRule, FlowType, ConnectionName, RemoteIP, Instance|
+|TunnelNatPacketDrop|No|Tunnel NAT Packet Drops|Count|Total|Number of NATed packets on a tunnel that dropped by drop type and NAT rule|NatRule, DropType, ConnectionName, RemoteIP, Instance|
+|TunnelPeakPackets|Yes|Tunnel Peak PPS|Count|Maximum|Tunnel Peak Packets Per Second|ConnectionName, RemoteIP, Instance|
|TunnelReverseNatedBytes|No|Tunnel Reverse NATed Bytes|Bytes|Total|Number of bytes that were reverse NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance| |TunnelReverseNatedPackets|No|Tunnel Reverse NATed Packets|Count|Total|Number of packets on a tunnel that were reverse NATed by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
+|TunnelTotalFlowCount|Yes|Tunnel Total Flow Count|Count|Total|Total flow count on a tunnel|ConnectionName, RemoteIP, Instance|
+|VnetAddressPrefixCount|Yes|VNet Address Prefix Count|Count|Total|Count of Vnet address prefixes behind gateway|Instance|
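Because the supported dimensions vary by resource type (compare the `vpngateways` rows above with the `virtualnetworkgateways` rows earlier), it can be handy to enumerate metric definitions at runtime instead of hard-coding them. A minimal sketch under the same assumptions (`azure-monitor-query` package, placeholder VPN gateway resource ID):

```python
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

# Placeholder resource ID for a VPN gateway (replace with a real one).
RESOURCE_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Network/vpnGateways/<gateway-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# Print each metric with its supported dimensions, e.g.
# TunnelAverageBandwidth -> ConnectionName, RemoteIP, Instance.
for definition in client.list_metric_definitions(RESOURCE_ID):
    dims = ", ".join(definition.dimensions or [])
    print(f"{definition.name}: {dims or 'No Dimensions'}")
```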
## Microsoft.NotificationHubs/Namespaces/NotificationHubs
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|Average_% Available Memory|Yes|% Available Memory|Count|Average|Average_% Available Memory|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Available Swap Space|Yes|% Available Swap Space|Count|Average|Average_% Available Swap Space|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Committed Bytes In Use|Yes|% Committed Bytes In Use|Count|Average|Average_% Committed Bytes In Use|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% DPC Time|Yes|% DPC Time|Count|Average|Average_% DPC Time|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Free Inodes|Yes|% Free Inodes|Count|Average|Average_% Free Inodes|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Free Space|Yes|% Free Space|Count|Average|Average_% Free Space|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Idle Time|Yes|% Idle Time|Count|Average|Average_% Idle Time|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Interrupt Time|Yes|% Interrupt Time|Count|Average|Average_% Interrupt Time|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% IO Wait Time|Yes|% IO Wait Time|Count|Average|Average_% IO Wait Time|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Nice Time|Yes|% Nice Time|Count|Average|Average_% Nice Time|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Privileged Time|Yes|% Privileged Time|Count|Average|Average_% Privileged Time|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Processor Time|Yes|% Processor Time|Count|Average|Average_% Processor Time|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Used Inodes|Yes|% Used Inodes|Count|Average|Average_% Used Inodes|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Used Memory|Yes|% Used Memory|Count|Average|Average_% Used Memory|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Used Space|Yes|% Used Space|Count|Average|Average_% Used Space|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Used Swap Space|Yes|% Used Swap Space|Count|Average|Average_% Used Swap Space|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% User Time|Yes|% User Time|Count|Average|Average_% User Time|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Available MBytes|Yes|Available MBytes|Count|Average|Average_Available MBytes|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Available MBytes Memory|Yes|Available MBytes Memory|Count|Average|Average_Available MBytes Memory|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Available MBytes Swap|Yes|Available MBytes Swap|Count|Average|Average_Available MBytes Swap|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Avg. Disk sec/Read|Yes|Avg. Disk sec/Read|Count|Average|Average_Avg. Disk sec/Read|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Avg. Disk sec/Transfer|Yes|Avg. Disk sec/Transfer|Count|Average|Average_Avg. Disk sec/Transfer|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Avg. Disk sec/Write|Yes|Avg. Disk sec/Write|Count|Average|Average_Avg. Disk sec/Write|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Bytes Received/sec|Yes|Bytes Received/sec|Count|Average|Average_Bytes Received/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Bytes Sent/sec|Yes|Bytes Sent/sec|Count|Average|Average_Bytes Sent/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Bytes Total/sec|Yes|Bytes Total/sec|Count|Average|Average_Bytes Total/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Current Disk Queue Length|Yes|Current Disk Queue Length|Count|Average|Average_Current Disk Queue Length|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Read Bytes/sec|Yes|Disk Read Bytes/sec|Count|Average|Average_Disk Read Bytes/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Reads/sec|Yes|Disk Reads/sec|Count|Average|Average_Disk Reads/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Transfers/sec|Yes|Disk Transfers/sec|Count|Average|Average_Disk Transfers/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Write Bytes/sec|Yes|Disk Write Bytes/sec|Count|Average|Average_Disk Write Bytes/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Writes/sec|Yes|Disk Writes/sec|Count|Average|Average_Disk Writes/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Free Megabytes|Yes|Free Megabytes|Count|Average|Average_Free Megabytes|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Free Physical Memory|Yes|Free Physical Memory|Count|Average|Average_Free Physical Memory|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Free Space in Paging Files|Yes|Free Space in Paging Files|Count|Average|Average_Free Space in Paging Files|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Free Virtual Memory|Yes|Free Virtual Memory|Count|Average|Average_Free Virtual Memory|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Logical Disk Bytes/sec|Yes|Logical Disk Bytes/sec|Count|Average|Average_Logical Disk Bytes/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Page Reads/sec|Yes|Page Reads/sec|Count|Average|Average_Page Reads/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Page Writes/sec|Yes|Page Writes/sec|Count|Average|Average_Page Writes/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Pages/sec|Yes|Pages/sec|Count|Average|Average_Pages/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Pct Privileged Time|Yes|Pct Privileged Time|Count|Average|Average_Pct Privileged Time|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Pct User Time|Yes|Pct User Time|Count|Average|Average_Pct User Time|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Physical Disk Bytes/sec|Yes|Physical Disk Bytes/sec|Count|Average|Average_Physical Disk Bytes/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Processes|Yes|Processes|Count|Average|Average_Processes|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Processor Queue Length|Yes|Processor Queue Length|Count|Average|Average_Processor Queue Length|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Size Stored In Paging Files|Yes|Size Stored In Paging Files|Count|Average|Average_Size Stored In Paging Files|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Bytes|Yes|Total Bytes|Count|Average|Average_Total Bytes|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Bytes Received|Yes|Total Bytes Received|Count|Average|Average_Total Bytes Received|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Bytes Transmitted|Yes|Total Bytes Transmitted|Count|Average|Average_Total Bytes Transmitted|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Collisions|Yes|Total Collisions|Count|Average|Average_Total Collisions|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Packets Received|Yes|Total Packets Received|Count|Average|Average_Total Packets Received|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Packets Transmitted|Yes|Total Packets Transmitted|Count|Average|Average_Total Packets Transmitted|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Rx Errors|Yes|Total Rx Errors|Count|Average|Average_Total Rx Errors|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Tx Errors|Yes|Total Tx Errors|Count|Average|Average_Total Tx Errors|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Uptime|Yes|Uptime|Count|Average|Average_Uptime|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Used MBytes Swap Space|Yes|Used MBytes Swap Space|Count|Average|Average_Used MBytes Swap Space|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Used Memory kBytes|Yes|Used Memory kBytes|Count|Average|Average_Used Memory kBytes|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Used Memory MBytes|Yes|Used Memory MBytes|Count|Average|Average_Used Memory MBytes|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Users|Yes|Users|Count|Average|Average_Users|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Virtual Shared Memory|Yes|Virtual Shared Memory|Count|Average|Average_Virtual Shared Memory|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Event|Yes|Event|Count|Average|Event|Source, EventLog, Computer, EventCategory, EventLevel, EventLevelName, EventID|
-|Heartbeat|Yes|Heartbeat|Count|Total|Heartbeat|Computer, OSType, Version, SourceComputerId|
-|Update|Yes|Update|Count|Average|Update|Computer, Product, Classification, UpdateState, Optional, Approved|
+|Average_% Available Memory|Yes|% Available Memory|Count|Average|Average_% Available Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Available Swap Space|Yes|% Available Swap Space|Count|Average|Average_% Available Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Committed Bytes In Use|Yes|% Committed Bytes In Use|Count|Average|Average_% Committed Bytes In Use. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% DPC Time|Yes|% DPC Time|Count|Average|Average_% DPC Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Free Inodes|Yes|% Free Inodes|Count|Average|Average_% Free Inodes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Free Space|Yes|% Free Space|Count|Average|Average_% Free Space. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Idle Time|Yes|% Idle Time|Count|Average|Average_% Idle Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Interrupt Time|Yes|% Interrupt Time|Count|Average|Average_% Interrupt Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% IO Wait Time|Yes|% IO Wait Time|Count|Average|Average_% IO Wait Time. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Nice Time|Yes|% Nice Time|Count|Average|Average_% Nice Time. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Privileged Time|Yes|% Privileged Time|Count|Average|Average_% Privileged Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Processor Time|Yes|% Processor Time|Count|Average|Average_% Processor Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Used Inodes|Yes|% Used Inodes|Count|Average|Average_% Used Inodes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Used Memory|Yes|% Used Memory|Count|Average|Average_% Used Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Used Space|Yes|% Used Space|Count|Average|Average_% Used Space. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Used Swap Space|Yes|% Used Swap Space|Count|Average|Average_% Used Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% User Time|Yes|% User Time|Count|Average|Average_% User Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Available MBytes|Yes|Available MBytes|Count|Average|Average_Available MBytes. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Available MBytes Memory|Yes|Available MBytes Memory|Count|Average|Average_Available MBytes Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Available MBytes Swap|Yes|Available MBytes Swap|Count|Average|Average_Available MBytes Swap. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Avg. Disk sec/Read|Yes|Avg. Disk sec/Read|Count|Average|Average_Avg. Disk sec/Read. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Avg. Disk sec/Transfer|Yes|Avg. Disk sec/Transfer|Count|Average|Average_Avg. Disk sec/Transfer. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Avg. Disk sec/Write|Yes|Avg. Disk sec/Write|Count|Average|Average_Avg. Disk sec/Write. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Bytes Received/sec|Yes|Bytes Received/sec|Count|Average|Average_Bytes Received/sec. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Bytes Sent/sec|Yes|Bytes Sent/sec|Count|Average|Average_Bytes Sent/sec. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Bytes Total/sec|Yes|Bytes Total/sec|Count|Average|Average_Bytes Total/sec. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Current Disk Queue Length|Yes|Current Disk Queue Length|Count|Average|Average_Current Disk Queue Length. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Read Bytes/sec|Yes|Disk Read Bytes/sec|Count|Average|Average_Disk Read Bytes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Reads/sec|Yes|Disk Reads/sec|Count|Average|Average_Disk Reads/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Transfers/sec|Yes|Disk Transfers/sec|Count|Average|Average_Disk Transfers/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Write Bytes/sec|Yes|Disk Write Bytes/sec|Count|Average|Average_Disk Write Bytes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Writes/sec|Yes|Disk Writes/sec|Count|Average|Average_Disk Writes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Free Megabytes|Yes|Free Megabytes|Count|Average|Average_Free Megabytes. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Free Physical Memory|Yes|Free Physical Memory|Count|Average|Average_Free Physical Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Free Space in Paging Files|Yes|Free Space in Paging Files|Count|Average|Average_Free Space in Paging Files. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Free Virtual Memory|Yes|Free Virtual Memory|Count|Average|Average_Free Virtual Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Logical Disk Bytes/sec|Yes|Logical Disk Bytes/sec|Count|Average|Average_Logical Disk Bytes/sec. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Page Reads/sec|Yes|Page Reads/sec|Count|Average|Average_Page Reads/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Page Writes/sec|Yes|Page Writes/sec|Count|Average|Average_Page Writes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Pages/sec|Yes|Pages/sec|Count|Average|Average_Pages/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Pct Privileged Time|Yes|Pct Privileged Time|Count|Average|Average_Pct Privileged Time. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Pct User Time|Yes|Pct User Time|Count|Average|Average_Pct User Time. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Physical Disk Bytes/sec|Yes|Physical Disk Bytes/sec|Count|Average|Average_Physical Disk Bytes/sec. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Processes|Yes|Processes|Count|Average|Average_Processes. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Processor Queue Length|Yes|Processor Queue Length|Count|Average|Average_Processor Queue Length. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Size Stored In Paging Files|Yes|Size Stored In Paging Files|Count|Average|Average_Size Stored In Paging Files. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Bytes|Yes|Total Bytes|Count|Average|Average_Total Bytes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Bytes Received|Yes|Total Bytes Received|Count|Average|Average_Total Bytes Received. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Bytes Transmitted|Yes|Total Bytes Transmitted|Count|Average|Average_Total Bytes Transmitted. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Collisions|Yes|Total Collisions|Count|Average|Average_Total Collisions. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Packets Received|Yes|Total Packets Received|Count|Average|Average_Total Packets Received. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Packets Transmitted|Yes|Total Packets Transmitted|Count|Average|Average_Total Packets Transmitted. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Rx Errors|Yes|Total Rx Errors|Count|Average|Average_Total Rx Errors. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Tx Errors|Yes|Total Tx Errors|Count|Average|Average_Total Tx Errors. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Uptime|Yes|Uptime|Count|Average|Average_Uptime. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Used MBytes Swap Space|Yes|Used MBytes Swap Space|Count|Average|Average_Used MBytes Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Used Memory kBytes|Yes|Used Memory kBytes|Count|Average|Average_Used Memory kBytes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Used Memory MBytes|Yes|Used Memory MBytes|Count|Average|Average_Used Memory MBytes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Users|Yes|Users|Count|Average|Average_Users. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Virtual Shared Memory|Yes|Virtual Shared Memory|Count|Average|Average_Virtual Shared Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Event|Yes|Event|Count|Average|Event. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Source, EventLog, Computer, EventCategory, EventLevel, EventLevelName, EventID|
+|Heartbeat|Yes|Heartbeat|Count|Total|Heartbeat. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, OSType, Version, SourceComputerId|
+|Update|Yes|Update|Count|Average|Update. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, Product, Classification, UpdateState, Optional, Approved|
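The counters above surface through the [metric alerts for logs feature](https://aka.ms/am-log-to-metric) and are backed by data collected into a Log Analytics workspace. As a rough illustration (not part of the original table), the following sketch uses the `azure-monitor-query` Python SDK to look at the underlying `Perf` records for one counter; the workspace GUID and the counter chosen are placeholders, and it assumes that counter is actually being collected into the workspace.

```python
# Minimal sketch: inspect the log data behind these workspace metrics by querying
# the Perf table with the azure-monitor-query SDK. The workspace ID and counter
# name below are placeholders, not values taken from this article.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = """
Perf
| where ObjectName == 'Processor' and CounterName == '% Processor Time'
| summarize avg(CounterValue) by Computer, bin(TimeGenerated, 5m)
"""

# Hypothetical workspace GUID; replace with your Log Analytics workspace ID.
response = client.query_workspace(
    workspace_id="00000000-0000-0000-0000-000000000000",
    query=query,
    timespan=timedelta(hours=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```

The same `LogsQueryClient` pattern applies to any counter in the table by changing the `ObjectName`/`CounterName` filter.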
## Microsoft.Peering/peerings
This latest update adds a new column and reorders the metrics to be alphabetical
|workload_qpu_metric|Yes|QPU Per Workload (Gen1)|Count|Average|QPU Per Workload. Range for A1 is 0-20, A2 is 0-40, A3 is 0-40, A4 is 0-80, A5 is 0-160, A6 is 0-320. Supported only for Power BI Embedded Generation 1 resources.|Workload|
-## Microsoft.Purview/accounts
+## microsoft.purview/accounts
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|ScanBillingUnits|Yes|Scan Billing Units|Count|Total|Indicates the scan billing units.|ResourceId|
-|ScanCancelled|Yes|Scan Cancelled|Count|Total|Indicates the number of scans cancelled.|ResourceId|
-|ScanCompleted|Yes|Scan Completed|Count|Total|Indicates the number of scans completed successfully.|ResourceId|
-|ScanFailed|Yes|Scan Failed|Count|Total|Indicates the number of scans failed.|ResourceId|
-|ScanTimeTaken|Yes|Scan time taken|Seconds|Total|Indicates the total scan time in seconds.|ResourceId|
+|DataMapCapacityUnits|Yes|Data Map Capacity Units|Count|Total|Indicates Data Map Capacity Units.|No Dimensions|
+|DataMapStorageSize|Yes|Data Map Storage Size|Bytes|Total|Indicates the data map storage size.|No Dimensions|
+|ScanCancelled|Yes|Scan Cancelled|Count|Total|Indicates the number of scans cancelled.|No Dimensions|
+|ScanCompleted|Yes|Scan Completed|Count|Total|Indicates the number of scans completed successfully.|No Dimensions|
+|ScanFailed|Yes|Scan Failed|Count|Total|Indicates the number of scans failed.|No Dimensions|
+|ScanTimeTaken|Yes|Scan time taken|Seconds|Total|Indicates the total scan time in seconds.|No Dimensions|
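If you want these Purview scan metrics programmatically rather than in the portal, a minimal sketch with the `azure-monitor-query` SDK (assuming version 1.1 or later, which exposes `MetricsQueryClient.query_resource`) could look like the following; the resource ID is a placeholder.

```python
# Minimal sketch: read one of the platform metrics listed above (ScanCompleted)
# with the azure-monitor-query SDK. The resource ID is a placeholder.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

# Hypothetical Purview account resource ID.
resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Purview/accounts/<account-name>"
)

response = client.query_resource(
    resource_id,
    metric_names=["ScanCompleted"],
    timespan=timedelta(days=1),
    aggregations=[MetricAggregationType.TOTAL],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.total)
```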
## Microsoft.RecoveryServices/Vaults
This latest update adds a new column and reorders the metrics to be alphabetical
|ActiveConnections|No|ActiveConnections|Count|Total|Total ActiveConnections for Microsoft.Relay.|EntityName| |ActiveListeners|No|ActiveListeners|Count|Total|Total ActiveListeners for Microsoft.Relay.|EntityName| |BytesTransferred|Yes|BytesTransferred|Bytes|Total|Total BytesTransferred for Microsoft.Relay.|EntityName|
-|ListenerConnections-ClientError|No|ListenerConnections-ClientError|Count|Total|ClientError on ListenerConnections for Microsoft.Relay.|EntityName, |
-|ListenerConnections-ServerError|No|ListenerConnections-ServerError|Count|Total|ServerError on ListenerConnections for Microsoft.Relay.|EntityName, |
-|ListenerConnections-Success|No|ListenerConnections-Success|Count|Total|Successful ListenerConnections for Microsoft.Relay.|EntityName, |
+|ListenerConnections-ClientError|No|ListenerConnections-ClientError|Count|Total|ClientError on ListenerConnections for Microsoft.Relay.|EntityName, OperationResult|
+|ListenerConnections-ServerError|No|ListenerConnections-ServerError|Count|Total|ServerError on ListenerConnections for Microsoft.Relay.|EntityName, OperationResult|
+|ListenerConnections-Success|No|ListenerConnections-Success|Count|Total|Successful ListenerConnections for Microsoft.Relay.|EntityName, OperationResult|
|ListenerConnections-TotalRequests|No|ListenerConnections-TotalRequests|Count|Total|Total ListenerConnections for Microsoft.Relay.|EntityName| |ListenerDisconnects|No|ListenerDisconnects|Count|Total|Total ListenerDisconnects for Microsoft.Relay.|EntityName|
-|SenderConnections-ClientError|No|SenderConnections-ClientError|Count|Total|ClientError on SenderConnections for Microsoft.Relay.|EntityName, |
-|SenderConnections-ServerError|No|SenderConnections-ServerError|Count|Total|ServerError on SenderConnections for Microsoft.Relay.|EntityName, |
-|SenderConnections-Success|No|SenderConnections-Success|Count|Total|Successful SenderConnections for Microsoft.Relay.|EntityName, |
+|SenderConnections-ClientError|No|SenderConnections-ClientError|Count|Total|ClientError on SenderConnections for Microsoft.Relay.|EntityName, OperationResult|
+|SenderConnections-ServerError|No|SenderConnections-ServerError|Count|Total|ServerError on SenderConnections for Microsoft.Relay.|EntityName, OperationResult|
+|SenderConnections-Success|No|SenderConnections-Success|Count|Total|Successful SenderConnections for Microsoft.Relay.|EntityName, OperationResult|
|SenderConnections-TotalRequests|No|SenderConnections-TotalRequests|Count|Total|Total SenderConnections requests for Microsoft.Relay.|EntityName| |SenderDisconnects|No|SenderDisconnects|Count|Total|Total SenderDisconnects for Microsoft.Relay.|EntityName| - ## microsoft.resources/subscriptions |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|Latency|No|Latency|Seconds|Average|Latency data for all requests to Azure Resource Manager|IsCustomerOriginated, Method, Namespace, RequestRegion, ResourceType, StatusCode, StatusCodeClass, Microsoft.SubscriptionId| |Traffic|No|Traffic|Count|Count|Traffic data for all requests to Azure Resource Manager|IsCustomerOriginated, Method, Namespace, RequestRegion, ResourceType, StatusCode, StatusCodeClass, Microsoft.SubscriptionId| - ## Microsoft.Search/searchServices |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
+|DocumentsProcessedCount|Yes|Document processed count|Count|Total|Number of documents processed|DataSourceName, Failed, IndexerName, IndexName, SkillsetName|
|SearchLatency|Yes|Search Latency|Seconds|Average|Average search latency for the search service|No Dimensions| |SearchQueriesPerSecond|Yes|Search queries per second|CountPerSecond|Average|Search queries per second for the search service|No Dimensions|
+|SkillExecutionCount|Yes|Skill execution invocation count|Count|Total|Number of skill executions|DataSourceName, Failed, IndexerName, SkillName, SkillsetName, SkillType|
|ThrottledSearchQueriesPercentage|Yes|Throttled search queries percentage|Percent|Average|Percentage of search queries that were throttled for the search service|No Dimensions|
-## Microsoft.ServiceBus/namespaces
+## Microsoft.ServiceBus/Namespaces
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|AbandonMessage|Yes|Abandoned Messages|Count|Total|Abandoned Messages|EntityName|
+|AbandonMessage|Yes|Abandoned Messages|Count|Total|Count of messages abandoned on a Queue/Topic.|EntityName|
|ActiveConnections|No|ActiveConnections|Count|Total|Total Active Connections for Microsoft.ServiceBus.|No Dimensions| |ActiveMessages|No|Count of active messages in a Queue/Topic.|Count|Average|Count of active messages in a Queue/Topic.|EntityName|
-|CompleteMessage|Yes|Completed Messages|Count|Total|Completed Messages|EntityName|
+|CompleteMessage|Yes|Completed Messages|Count|Total|Count of messages completed on a Queue/Topic.|EntityName|
|ConnectionsClosed|No|Connections Closed.|Count|Average|Connections Closed for Microsoft.ServiceBus.|EntityName| |ConnectionsOpened|No|Connections Opened.|Count|Average|Connections Opened for Microsoft.ServiceBus.|EntityName|
-|CPUXNS|No|CPU (Deprecated)|Percent|Maximum|Service bus premium namespace CPU usage metric. This metric is depricated. Please use the CPU metric (NamespaceCpuUsage) instead.|No Dimensions|
+|CPUXNS|No|CPU (Deprecated)|Percent|Maximum|Service bus premium namespace CPU usage metric. This metric is deprecated. Please use the CPU metric (NamespaceCpuUsage) instead.|Replica|
|DeadletteredMessages|No|Count of dead-lettered messages in a Queue/Topic.|Count|Average|Count of dead-lettered messages in a Queue/Topic.|EntityName| |IncomingMessages|Yes|Incoming Messages|Count|Total|Incoming Messages for Microsoft.ServiceBus.|EntityName| |IncomingRequests|Yes|Incoming Requests|Count|Total|Incoming Requests for Microsoft.ServiceBus.|EntityName| |Messages|No|Count of messages in a Queue/Topic.|Count|Average|Count of messages in a Queue/Topic.|EntityName|
-|NamespaceCpuUsage|No|CPU|Percent|Maximum|CPU usage metric for Premium SKU namespaces.|No Dimensions|
-|NamespaceMemoryUsage|No|Memory Usage|Percent|Maximum|Memory usage metric for Premium SKU namespaces.|No Dimensions|
+|NamespaceCpuUsage|No|CPU|Percent|Maximum|Service bus premium namespace CPU usage metric.|Replica|
+|NamespaceMemoryUsage|No|Memory Usage|Percent|Maximum|Service bus premium namespace memory usage metric.|Replica|
|OutgoingMessages|Yes|Outgoing Messages|Count|Total|Outgoing Messages for Microsoft.ServiceBus.|EntityName| |PendingCheckpointOperationCount|No|Pending Checkpoint Operations Count.|Count|Total|Pending Checkpoint Operations Count.|No Dimensions| |ScheduledMessages|No|Count of scheduled messages in a Queue/Topic.|Count|Average|Count of scheduled messages in a Queue/Topic.|EntityName|
-|ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.ServiceBus.|EntityName, |
-|ServerSendLatency|Yes|Server Send Latency.|Milliseconds|Average|Server Send Latency.|EntityName|
+|ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.ServiceBus.|EntityName, OperationResult|
|Size|No|Size|Bytes|Average|Size of a Queue/Topic in Bytes.|EntityName|
-|SuccessfulRequests|No|Successful Requests|Count|Total|Total successful requests for a namespace|EntityName, |
-|ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.ServiceBus.|EntityName, MessagingErrorSubCode|
-|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.ServiceBus.|EntityName, |
-|WSXNS|No|Memory Usage (Deprecated)|Percent|Maximum|Service bus premium namespace memory usage metric. This metric is deprecated. Please use the Memory Usage (NamespaceMemoryUsage) metric instead.|No Dimensions|
+|SuccessfulRequests|No|Successful Requests|Count|Total|Total successful requests for a namespace|EntityName, OperationResult|
+|ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.ServiceBus.|EntityName, OperationResult, MessagingErrorSubCode|
+|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.ServiceBus.|EntityName, OperationResult|
+|WSXNS|No|Memory Usage (Deprecated)|Percent|Maximum|Service bus premium namespace memory usage metric. This metric is deprecated. Please use the Memory Usage (NamespaceMemoryUsage) metric instead.|Replica|
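Several of the premium namespace metrics above now carry a `Replica` dimension. A hedged sketch of splitting `NamespaceCpuUsage` by that dimension with `MetricsQueryClient` follows; the namespace resource ID is a placeholder, and the `filter` value mirrors the `$filter` syntax of the Azure Monitor metrics REST API.

```python
# Minimal sketch: query NamespaceCpuUsage split on the Replica dimension shown
# above. The resource ID is a placeholder; assumes azure-monitor-query >= 1.1.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

# Hypothetical Service Bus premium namespace resource ID.
namespace_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.ServiceBus/namespaces/<namespace-name>"
)

response = client.query_resource(
    namespace_id,
    metric_names=["NamespaceCpuUsage"],
    timespan=timedelta(hours=6),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.MAXIMUM],
    filter="Replica eq '*'",  # request one time series per replica
)

for metric in response.metrics:
    # Each element of metric.timeseries corresponds to one Replica value
    # when the filter above requests a dimension split.
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.maximum)
```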
## Microsoft.SignalRService/SignalR |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
+|ConnectionCloseCount|Yes|Connection Close Count|Count|Total|The count of connections closed by various reasons.|Endpoint, ConnectionCloseCategory|
|ConnectionCount|Yes|Connection Count|Count|Maximum|The amount of user connection.|Endpoint|
+|ConnectionOpenCount|Yes|Connection Open Count|Count|Total|The count of new connections opened.|Endpoint|
|ConnectionQuotaUtilization|Yes|Connection Quota Utilization|Percent|Maximum|The percentage of connection connected relative to connection quota.|No Dimensions| |InboundTraffic|Yes|Inbound Traffic|Bytes|Total|The inbound traffic of service|No Dimensions| |MessageCount|Yes|Message Count|Count|Total|The total amount of messages.|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|InboundTraffic|Yes|Inbound Traffic|Bytes|Total|The inbound traffic of service|No Dimensions|
-|OutboundTraffic|Yes|Outbound Traffic|Bytes|Total|The outbound traffic of service|No Dimensions|
-|TotalConnectionCount|Yes|Connection Count|Count|Maximum|The amount of user connection.|No Dimensions|
+|ConnectionCloseCount|Yes|Connection Close Count|Count|Total|The count of connections closed by various reasons.|ConnectionCloseCategory|
+|ConnectionOpenCount|Yes|Connection Open Count|Count|Total|The count of new connections opened.|No Dimensions|
+|ConnectionQuotaUtilization|Yes|Connection Quota Utilization|Percent|Maximum|The percentage of connection connected relative to connection quota.|No Dimensions|
+|InboundTraffic|Yes|Inbound Traffic|Bytes|Total|The traffic originating from outside to inside of the service. It is aggregated by adding all the bytes of the traffic.|No Dimensions|
+|OutboundTraffic|Yes|Outbound Traffic|Bytes|Total|The traffic originating from inside to outside of the service. It is aggregated by adding all the bytes of the traffic.|No Dimensions|
+|TotalConnectionCount|Yes|Connection Count|Count|Maximum|The number of user connections established to the service. It is aggregated by adding all the online connections.|No Dimensions|
## Microsoft.Sql/managedInstances
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |ClientIOPS|Yes|Total Client IOPS|Count|Average|The rate of client file operations processed by the Cache.|No Dimensions|
-|ClientLatency|Yes|Average Client Latency|Milliseconds|Average|Average latency of client file operations to the Cache.|No Dimensions|
+|ClientLatency|Yes|Average Client Latency|MilliSeconds|Average|Average latency of client file operations to the Cache.|No Dimensions|
|ClientLockIOPS|Yes|Client Lock IOPS|CountPerSecond|Average|Client file locking operations per second.|No Dimensions| |ClientMetadataReadIOPS|Yes|Client Metadata Read IOPS|CountPerSecond|Average|The rate of client file operations sent to the Cache, excluding data reads, that do not modify persistent state.|No Dimensions| |ClientMetadataWriteIOPS|Yes|Client Metadata Write IOPS|CountPerSecond|Average|The rate of client file operations sent to the Cache, excluding data writes, that modify persistent state.|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|StorageTargetFreeWriteSpace|Yes|Storage Target Free Write Space|Bytes|Average|Write space available for dirty data associated with a storage target.|StorageTarget| |StorageTargetHealth|Yes|Storage Target Health|Count|Average|Boolean results of connectivity test between the Cache and Storage Targets.|No Dimensions| |StorageTargetIOPS|Yes|Total StorageTarget IOPS|Count|Average|The rate of all file operations the Cache sends to a particular StorageTarget.|StorageTarget|
-|StorageTargetLatency|Yes|StorageTarget Latency|Milliseconds|Average|The average round trip latency of all the file operations the Cache sends to a partricular StorageTarget.|StorageTarget|
+|StorageTargetLatency|Yes|StorageTarget Latency|MilliSeconds|Average|The average round trip latency of all the file operations the Cache sends to a particular StorageTarget.|StorageTarget|
|StorageTargetMetadataReadIOPS|Yes|StorageTarget Metadata Read IOPS|CountPerSecond|Average|The rate of file operations that do not modify persistent state, and excluding the read operation, that the Cache sends to a particular StorageTarget.|StorageTarget| |StorageTargetMetadataWriteIOPS|Yes|StorageTarget Metadata Write IOPS|CountPerSecond|Average|The rate of file operations that do modify persistent state and excluding the write operation, that the Cache sends to a particular StorageTarget.|StorageTarget| |StorageTargetReadAheadThroughput|Yes|StorageTarget Read Ahead Throughput|BytesPerSecond|Average|The rate the Cache opportunistically reads data from the StorageTarget.|StorageTarget|
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|AppConnections|Yes|Connections|Count|Average|The number of bound sockets existing in the sandbox (w3wp.exe and its child processes). A bound socket is created by calling bind()/connect() APIs and remains until said socket is closed with CloseHandle()/closesocket().|Instance|
-|AverageMemoryWorkingSet|Yes|Average memory working set|Bytes|Average|The average amount of memory used by the app, in megabytes (MiB).|Instance|
-|AverageResponseTime|Yes|Average Response Time (deprecated)|Seconds|Average|The average time taken for the app to serve requests, in seconds.|Instance|
-|BytesReceived|Yes|Data In|Bytes|Total|The amount of incoming bandwidth consumed by the app, in MiB.|Instance|
-|BytesSent|Yes|Data Out|Bytes|Total|The amount of outgoing bandwidth consumed by the app, in MiB.|Instance|
-|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric. Please see: https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
-|CurrentAssemblies|Yes|Current Assemblies|Count|Average|The current number of Assemblies loaded across all AppDomains in this application.|Instance|
-|FileSystemUsage|Yes|File System Usage|Bytes|Average|Percentage of filesystem quota consumed by the app.|No Dimensions|
-|FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count|Instance|
-|FunctionExecutionUnits|Yes|Function Execution Units|Count|Total|Function Execution Units|Instance|
-|Gen0Collections|Yes|Gen 0 Garbage Collections|Count|Total|The number of times the generation 0 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs.|Instance|
-|Gen1Collections|Yes|Gen 1 Garbage Collections|Count|Total|The number of times the generation 1 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs.|Instance|
-|Gen2Collections|Yes|Gen 2 Garbage Collections|Count|Total|The number of times the generation 2 objects are garbage collected since the start of the app process.|Instance|
-|Handles|Yes|Handle Count|Count|Average|The total number of handles currently open by the app process.|Instance|
-|HealthCheckStatus|Yes|Health check status|Count|Average|Health check status|Instance|
-|Http101|Yes|Http 101|Count|Total|The count of requests resulting in an HTTP status code 101.|Instance|
-|Http2xx|Yes|Http 2xx|Count|Total|The count of requests resulting in an HTTP status code = 200 but < 300.|Instance|
-|Http3xx|Yes|Http 3xx|Count|Total|The count of requests resulting in an HTTP status code = 300 but < 400.|Instance|
-|Http401|Yes|Http 401|Count|Total|The count of requests resulting in HTTP 401 status code.|Instance|
-|Http403|Yes|Http 403|Count|Total|The count of requests resulting in HTTP 403 status code.|Instance|
-|Http404|Yes|Http 404|Count|Total|The count of requests resulting in HTTP 404 status code.|Instance|
-|Http406|Yes|Http 406|Count|Total|The count of requests resulting in HTTP 406 status code.|Instance|
-|Http4xx|Yes|Http 4xx|Count|Total|The count of requests resulting in an HTTP status code = 400 but < 500.|Instance|
-|Http5xx|Yes|Http Server Errors|Count|Total|The count of requests resulting in an HTTP status code = 500 but < 600.|Instance|
-|HttpResponseTime|Yes|Response Time|Seconds|Average|The time taken for the app to serve requests, in seconds.|Instance|
-|IoOtherBytesPerSecond|Yes|IO Other Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is issuing bytes to I/O operations that don't involve data, such as control operations.|Instance|
-|IoOtherOperationsPerSecond|Yes|IO Other Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing I/O operations that aren't read or write operations.|Instance|
-|IoReadBytesPerSecond|Yes|IO Read Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is reading bytes from I/O operations.|Instance|
-|IoReadOperationsPerSecond|Yes|IO Read Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing read I/O operations.|Instance|
-|IoWriteBytesPerSecond|Yes|IO Write Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is writing bytes to I/O operations.|Instance|
-|IoWriteOperationsPerSecond|Yes|IO Write Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing write I/O operations.|Instance|
-|MemoryWorkingSet|Yes|Memory working set|Bytes|Average|The current amount of memory used by the app, in MiB.|Instance|
-|PrivateBytes|Yes|Private Bytes|Bytes|Average|Private Bytes is the current size, in bytes, of memory that the app process has allocated that can't be shared with other processes.|Instance|
-|Requests|Yes|Requests|Count|Total|The total number of requests regardless of their resulting HTTP status code.|Instance|
-|RequestsInApplicationQueue|Yes|Requests In Application Queue|Count|Average|The number of requests in the application request queue.|Instance|
-|ScmCpuTime|Yes|ScmCpuTime|Seconds|Total|ScmCpuTime|Instance|
-|ScmPrivateBytes|Yes|ScmPrivateBytes|Bytes|Average|ScmPrivateBytes|Instance|
-|Threads|Yes|Thread Count|Count|Average|The number of threads currently active in the app process.|Instance|
-|TotalAppDomains|Yes|Total App Domains|Count|Average|The current number of AppDomains loaded in this application.|Instance|
-|TotalAppDomainsUnloaded|Yes|Total App Domains Unloaded|Count|Average|The total number of AppDomains unloaded since the start of the application.|Instance|
+|AppConnections|Yes|Connections|Count|Average|The number of bound sockets existing in the sandbox (w3wp.exe and its child processes). A bound socket is created by calling bind()/connect() APIs and remains until said socket is closed with CloseHandle()/closesocket(). For WebApps and FunctionApps.|Instance|
+|AverageMemoryWorkingSet|Yes|Average memory working set|Bytes|Average|The average amount of memory used by the app, in megabytes (MiB). For WebApps and FunctionApps.|Instance|
+|AverageResponseTime|Yes|Average Response Time (deprecated)|Seconds|Average|The average time taken for the app to serve requests, in seconds. For WebApps and FunctionApps.|Instance|
+|BytesReceived|Yes|Data In|Bytes|Total|The amount of incoming bandwidth consumed by the app, in MiB. For WebApps and FunctionApps.|Instance|
+|BytesSent|Yes|Data Out|Bytes|Total|The amount of outgoing bandwidth consumed by the app, in MiB. For WebApps and FunctionApps.|Instance|
+|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric, see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage). For WebApps only.|Instance|
+|CurrentAssemblies|Yes|Current Assemblies|Count|Average|The current number of Assemblies loaded across all AppDomains in this application. For WebApps and FunctionApps.|Instance|
+|FileSystemUsage|Yes|File System Usage|Bytes|Average|Percentage of filesystem quota consumed by the app. For WebApps and FunctionApps.|No Dimensions|
+|FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count. For FunctionApps only.|Instance|
+|FunctionExecutionUnits|Yes|Function Execution Units|Count|Total|Function Execution Units. For FunctionApps only.|Instance|
+|Gen0Collections|Yes|Gen 0 Garbage Collections|Count|Total|The number of times the generation 0 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs. For WebApps and FunctionApps.|Instance|
+|Gen1Collections|Yes|Gen 1 Garbage Collections|Count|Total|The number of times the generation 1 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs. For WebApps and FunctionApps.|Instance|
+|Gen2Collections|Yes|Gen 2 Garbage Collections|Count|Total|The number of times the generation 2 objects are garbage collected since the start of the app process. For WebApps and FunctionApps.|Instance|
+|Handles|Yes|Handle Count|Count|Average|The total number of handles currently open by the app process. For WebApps and FunctionApps.|Instance|
+|HealthCheckStatus|Yes|Health check status|Count|Average|Health check status. For WebApps and FunctionApps.|Instance|
+|Http101|Yes|Http 101|Count|Total|The count of requests resulting in an HTTP status code 101. For WebApps and FunctionApps.|Instance|
+|Http2xx|Yes|Http 2xx|Count|Total|The count of requests resulting in an HTTP status code = 200 but < 300. For WebApps and FunctionApps.|Instance|
+|Http3xx|Yes|Http 3xx|Count|Total|The count of requests resulting in an HTTP status code = 300 but < 400. For WebApps and FunctionApps.|Instance|
+|Http401|Yes|Http 401|Count|Total|The count of requests resulting in HTTP 401 status code. For WebApps and FunctionApps.|Instance|
+|Http403|Yes|Http 403|Count|Total|The count of requests resulting in HTTP 403 status code. For WebApps and FunctionApps.|Instance|
+|Http404|Yes|Http 404|Count|Total|The count of requests resulting in HTTP 404 status code. For WebApps and FunctionApps.|Instance|
+|Http406|Yes|Http 406|Count|Total|The count of requests resulting in HTTP 406 status code. For WebApps and FunctionApps.|Instance|
+|Http4xx|Yes|Http 4xx|Count|Total|The count of requests resulting in an HTTP status code = 400 but < 500. For WebApps and FunctionApps.|Instance|
+|Http5xx|Yes|Http Server Errors|Count|Total|The count of requests resulting in an HTTP status code = 500 but < 600. For WebApps and FunctionApps.|Instance|
+|HttpResponseTime|Yes|Response Time|Seconds|Average|The time taken for the app to serve requests, in seconds. For WebApps and FunctionApps.|Instance|
+|IoOtherBytesPerSecond|Yes|IO Other Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is issuing bytes to I/O operations that don't involve data, such as control operations. For WebApps and FunctionApps.|Instance|
+|IoOtherOperationsPerSecond|Yes|IO Other Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing I/O operations that aren't read or write operations. For WebApps and FunctionApps.|Instance|
+|IoReadBytesPerSecond|Yes|IO Read Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is reading bytes from I/O operations. For WebApps and FunctionApps.|Instance|
+|IoReadOperationsPerSecond|Yes|IO Read Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing read I/O operations. For WebApps and FunctionApps.|Instance|
+|IoWriteBytesPerSecond|Yes|IO Write Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is writing bytes to I/O operations. For WebApps and FunctionApps.|Instance|
+|IoWriteOperationsPerSecond|Yes|IO Write Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing write I/O operations. For WebApps and FunctionApps.|Instance|
+|MemoryWorkingSet|Yes|Memory working set|Bytes|Average|The current amount of memory used by the app, in MiB. For WebApps and FunctionApps.|Instance|
+|PrivateBytes|Yes|Private Bytes|Bytes|Average|Private Bytes is the current size, in bytes, of memory that the app process has allocated that can't be shared with other processes. For WebApps and FunctionApps.|Instance|
+|Requests|Yes|Requests|Count|Total|The total number of requests regardless of their resulting HTTP status code. For WebApps and FunctionApps.|Instance|
+|RequestsInApplicationQueue|Yes|Requests In Application Queue|Count|Average|The number of requests in the application request queue. For WebApps and FunctionApps.|Instance|
+|Threads|Yes|Thread Count|Count|Average|The number of threads currently active in the app process. For WebApps and FunctionApps.|Instance|
+|TotalAppDomains|Yes|Total App Domains|Count|Average|The current number of AppDomains loaded in this application. For WebApps and FunctionApps.|Instance|
+|TotalAppDomainsUnloaded|Yes|Total App Domains Unloaded|Count|Average|The total number of AppDomains unloaded since the start of the application. For WebApps and FunctionApps.|Instance|
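Because `Http5xx` is exportable and supports the `Total` aggregation, it is a common target for a metric alert. The following is a rough sketch using the `azure-mgmt-monitor` management SDK; the resource IDs, rule name, and threshold are placeholders, and exact model fields can vary slightly between package versions.

```python
# Rough sketch: create a metric alert on the Http5xx metric listed above with
# the azure-mgmt-monitor management SDK. IDs, names, and thresholds are
# placeholders, not values from this article.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertResource,
    MetricAlertSingleResourceMultipleMetricCriteria,
    MetricCriteria,
)

subscription_id = "<sub-id>"
client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Hypothetical web app resource ID.
webapp_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/<rg>"
    "/providers/Microsoft.Web/sites/<app-name>"
)

criteria = MetricAlertSingleResourceMultipleMetricCriteria(
    all_of=[
        MetricCriteria(
            name="HighServerErrors",
            metric_name="Http5xx",
            time_aggregation="Total",
            operator="GreaterThan",
            threshold=10,
        )
    ]
)

client.metric_alerts.create_or_update(
    "<rg>",
    "http5xx-alert",
    MetricAlertResource(
        location="global",
        description="Alert when HTTP 5xx responses exceed 10 in 15 minutes",
        severity=3,
        enabled=True,
        scopes=[webapp_id],
        evaluation_frequency="PT5M",
        window_size="PT15M",
        criteria=criteria,
    ),
)
```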
## Microsoft.Web/sites/slots
This latest update adds a new column and reorders the metrics to be alphabetical
|CdnPercentageOf5XX|Yes|CdnPercentageOf5XX|Percent|Total|CdnPercentageOf5XX|Instance| |CdnRequestCount|Yes|CdnRequestCount|Count|Total|CdnRequestCount|Instance| |CdnResponseSize|Yes|CdnResponseSize|Bytes|Total|CdnResponseSize|Instance|
-|CdnTotalLatency|Yes|CdnTotalLatency|Seconds|Total|CdnTotalLatency|Instance|
+|CdnTotalLatency|Yes|CdnTotalLatency|MilliSeconds|Total|CdnTotalLatency|Instance|
|FunctionErrors|Yes|FunctionErrors|Count|Total|FunctionErrors|Instance| |FunctionHits|Yes|FunctionHits|Count|Total|FunctionHits|Instance| |SiteErrors|Yes|SiteErrors|Count|Total|SiteErrors|Instance| |SiteHits|Yes|SiteHits|Count|Total|SiteHits|Instance| - ## Wandisco.Fusion/migrators |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalMigratedDataInBytes|Yes|Total Migrated Data in Bytes|Bytes|Total|This provides a view of the successfully migrated Bytes for a given migrator|No Dimensions| |TotalTransactions|Yes|Total Transactions|Count|Total|This provides a running total of the Data Transactions for which the user could be billed.|No Dimensions| - ## Next steps - [Read about metrics in Azure Monitor](../data-platform.md)
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-categories.md
Title: Supported categories for Azure Monitor resource logs description: Understand the supported services and event schemas for Azure Monitor resource logs. Previously updated : 02/08/2022 Last updated : 03/03/2022
Some categories might be supported only for specific types of resources. See the
If you think something is missing, you can open a GitHub comment at the bottom of this article. -
-## Microsoft.AAD/domainServices
+## Microsoft.AAD/DomainServices
|Category|Category Display Name|Costs To Export| ||||
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| |||| |GatewayLogs|Logs related to ApiManagement Gateway|No|
+|WebSocketConnectionLogs|Logs related to Websocket Connections|Yes|
## Microsoft.AppConfiguration/configurationStores
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| |||| |AuditEvent|AuditEvent message log category.|No|
+|AuditEvent|AuditEvent message log category.|No|
|ERR|Error message log category.|No|
+|ERR|Error message log category.|No|
+|INF|Informational message log category.|No|
|INF|Informational message log category.|No|
+|NotProcessed|Requests which could not be processed.|Yes|
+|Operational|Operational message log category.|Yes|
+|WRN|Warning message log category.|Yes|
|WRN|Warning message log category.|No|
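The category names in these tables are the values you pass when you create a diagnostic setting on the resource. As a minimal sketch (assuming the `azure-mgmt-monitor` SDK, with placeholder resource and workspace IDs, and using two category names from the table above):

```python
# Minimal sketch: route two of the log categories listed above to a Log Analytics
# workspace with a diagnostic setting. The resource and workspace IDs are
# placeholders; use category names your resource actually exposes.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import DiagnosticSettingsResource, LogSettings

client = MonitorManagementClient(DefaultAzureCredential(), "<sub-id>")

resource_id = "<full resource ID of the resource emitting these categories>"
workspace_id = "<full resource ID of a Log Analytics workspace>"

client.diagnostic_settings.create_or_update(
    resource_uri=resource_id,
    name="send-to-workspace",
    parameters=DiagnosticSettingsResource(
        workspace_id=workspace_id,
        logs=[
            LogSettings(category="AuditEvent", enabled=True),
            LogSettings(category="Operational", enabled=True),
        ],
    ),
)
```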
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| ||||
-|DscNodeStatus|Dsc Node Status|No|
-|JobLogs|Job Logs|No|
-|JobStreams|Job Streams|No|
+|AuditEvent|AuditEvent|Yes|
+|DscNodeStatus|DscNodeStatus|No|
+|JobLogs|JobLogs|No|
+|JobStreams|JobStreams|No|
## Microsoft.AutonomousDevelopmentPlatform/accounts
If you think something is missing, you can open a GitHub comment at the bottom o
|Request|Request|Yes|
+## Microsoft.AutonomousDevelopmentPlatform/datapools
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit|Audit|Yes|
+|Operational|Operational|Yes|
+|Request|Request|Yes|
++
+## Microsoft.AutonomousDevelopmentPlatform/workspaces
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit|Audit|Yes|
+|Operational|Operational|Yes|
+|Request|Request|Yes|
++ ## microsoft.avs/privateClouds |Category|Category Display Name|Costs To Export|
If you think something is missing, you can open a GitHub comment at the bottom o
|ServiceLog|Service Logs|No|
-## Microsoft.BatchAI/workspaces
-|Category|Category Display Name|Costs To Export|
-||||
-|BaiClusterEvent|BaiClusterEvent|No|
-|BaiClusterNodeEvent|BaiClusterNodeEvent|No|
-|BaiJobEvent|BaiJobEvent|No|
+## Microsoft.BatchAI/workspaces
+
+|Category|Category Display Name|Costs To Export|
+||||
+|BaiClusterEvent|BaiClusterEvent|No|
+|BaiClusterNodeEvent|BaiClusterNodeEvent|No|
+|BaiJobEvent|BaiJobEvent|No|
## Microsoft.Blockchain/blockchainMembers
If you think something is missing, you can open a GitHub comment at the bottom o
|CallDiagnostics|Call Diagnostics Logs|Yes| |CallSummary|Call Summary Logs|Yes| |ChatOperational|Operational Chat Logs|No|
+|NetworkTraversalOperational|Operational Network Traversal Logs|Yes|
|SMSOperational|Operational SMS Logs|No| |Usage|Usage Records|No|
If you think something is missing, you can open a GitHub comment at the bottom o
|||| |cloud-controller-manager|Kubernetes Cloud Controller Manager|Yes| |cluster-autoscaler|Kubernetes Cluster Autoscaler|No|
-|guard|Kubernetes Guard|No|
+|csi-azuredisk-controller|csi-azuredisk-controller|Yes|
+|csi-azurefile-controller|csi-azurefile-controller|Yes|
+|csi-snapshot-controller|csi-snapshot-controller|Yes|
+|guard|guard|No|
|kube-apiserver|Kubernetes API Server|No| |kube-audit|Kubernetes Audit|No| |kube-audit-admin|Kubernetes Audit Admin Logs|No|
If you think something is missing, you can open a GitHub comment at the bottom o
|Operational|Operational events|No|
+## Microsoft.Dashboard/grafana
+
+|Category|Category Display Name|Costs To Export|
+||||
+|GrafanaLoginEvents|Grafana Login Events|Yes|
++ ## Microsoft.Databricks/workspaces |Category|Category Display Name|Costs To Export|
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| |||| |Audit|Audit Logs|No|
+|JobInfo|Job Info Logs|Yes|
|Requests|Request Logs|No|
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| |||| |AgentHealthStatus|AgentHealthStatus|No|
+|AgentHealthStatus|AgentHealthStatus|Yes|
+|Checkpoint|Checkpoint|Yes|
|Checkpoint|Checkpoint|No| |Connection|Connection|No|
+|Connection|Connection|Yes|
+|Error|Error|Yes|
|Error|Error|No| |HostRegistration|HostRegistration|No|
+|HostRegistration|HostRegistration|Yes|
+|Management|Management|Yes|
|Management|Management|No|
+|NetworkData|Network Data Logs|Yes|
+|SessionHostManagement|Session Host Management Activity Logs|Yes|
## Microsoft.DesktopVirtualization/scalingplans |Category|Category Display Name|Costs To Export| ||||
-|Autoscaling|Autoscaling logs|Yes|
+|Autoscale|Autoscale logs|Yes|
## Microsoft.DesktopVirtualization/workspaces
If you think something is missing, you can open a GitHub comment at the bottom o
|ResourceProviderOperation|ResourceProviderOperation|Yes|
-## Microsoft.DocumentDB/databaseAccounts
+## Microsoft.DocumentDB/cassandraClusters
+
+|Category|Category Display Name|Costs To Export|
+||||
+|CassandraAudit|CassandraAudit|Yes|
+|CassandraLogs|CassandraLogs|Yes|
++
+## Microsoft.DocumentDB/DatabaseAccounts
|Category|Category Display Name|Costs To Export| ||||
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| ||||
+|DataPlaneRequests|Data plane operations logs|No|
|DeliveryFailures|Delivery Failure Logs|No| |PublishFailures|Publish Failure Logs|No|
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| ||||
-|DeliveryFailures|Delivery Failure Logs|No|
+|DataPlaneRequests|Data plane operations logs|No|
|PublishFailures|Publish Failure Logs|No|
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| ||||
+|DataPlaneRequests|Data plane operations logs|No|
|DeliveryFailures|Delivery Failure Logs|No| |PublishFailures|Publish Failure Logs|No|
-## Microsoft.EventHub/namespaces
+## Microsoft.EventHub/Namespaces
|Category|Category Display Name|Costs To Export| ||||
If you think something is missing, you can open a GitHub comment at the bottom o
|AuditLogs|FHIR Audit logs|Yes|
-## Microsoft.Insights/AutoscaleSettings
+## microsoft.insights/autoscalesettings
|Category|Category Display Name|Costs To Export| ||||
If you think something is missing, you can open a GitHub comment at the bottom o
|AppTraces|Traces|No|
-## Microsoft.KeyVault/managedHSMs
+## microsoft.keyvault/managedhsms
|Category|Category Display Name|Costs To Export|
||||
-|AuditEvent|Audit Logs|No|
+|AuditEvent|Audit Event|No|
## Microsoft.KeyVault/vaults
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
|AuditEvent|Audit Logs|No|
+|AzurePolicyEvaluationDetails|Azure Policy Evaluation Details|Yes|
## Microsoft.Kusto/Clusters
If you think something is missing, you can open a GitHub comment at the bottom o
|IntegrationAccountTrackingEvents|Integration Account track events|No|
-## Microsoft.Logic/workflows
+## Microsoft.Logic/IntegrationAccounts
+
+|Category|Category Display Name|Costs To Export|
+||||
+|IntegrationAccountTrackingEvents|Integration Account track events|No|
++
+## Microsoft.Logic/Workflows
|Category|Category Display Name|Costs To Export|
||||
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
|AmlComputeClusterEvent|AmlComputeClusterEvent|No|
+|AmlComputeClusterEvent|AmlComputeClusterEvent|No|
+|AmlComputeClusterNodeEvent|AmlComputeClusterNodeEvent|No|
|AmlComputeCpuGpuUtilization|AmlComputeCpuGpuUtilization|No|
+|AmlComputeCpuGpuUtilization|AmlComputeCpuGpuUtilization|No|
+|AmlComputeJobEvent|AmlComputeJobEvent|No|
|AmlComputeJobEvent|AmlComputeJobEvent|No|
|AmlRunStatusChangedEvent|AmlRunStatusChangedEvent|No|
+|AmlRunStatusChangedEvent|AmlRunStatusChangedEvent|No|
+|ComputeInstanceEvent|ComputeInstanceEvent|Yes|
+|DataLabelChangeEvent|DataLabelChangeEvent|Yes|
+|DataLabelReadEvent|DataLabelReadEvent|Yes|
+|DataSetChangeEvent|DataSetChangeEvent|Yes|
+|DataSetReadEvent|DataSetReadEvent|Yes|
+|DataStoreChangeEvent|DataStoreChangeEvent|Yes|
+|DataStoreReadEvent|DataStoreReadEvent|Yes|
+|DeploymentEventACI|DeploymentEventACI|Yes|
+|DeploymentEventAKS|DeploymentEventAKS|Yes|
+|DeploymentReadEvent|DeploymentReadEvent|Yes|
+|EnvironmentChangeEvent|EnvironmentChangeEvent|Yes|
+|EnvironmentReadEvent|EnvironmentReadEvent|Yes|
+|InferencingOperationACI|InferencingOperationACI|Yes|
+|InferencingOperationAKS|InferencingOperationAKS|Yes|
+|ModelsActionEvent|ModelsActionEvent|Yes|
+|ModelsChangeEvent|ModelsChangeEvent|Yes|
+|ModelsReadEvent|ModelsReadEvent|Yes|
+|PipelineChangeEvent|PipelineChangeEvent|Yes|
+|PipelineReadEvent|PipelineReadEvent|Yes|
+|RunEvent|RunEvent|Yes|
+|RunReadEvent|RunReadEvent|Yes|
## Microsoft.Media/mediaservices
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
|KeyDeliveryRequests|Key Delivery Requests|No|
+|MediaAccount|Media Account Health Status|Yes|
## Microsoft.Media/videoanalyzers
If you think something is missing, you can open a GitHub comment at the bottom o
|Operational|Operational Logs|Yes|
-## Microsoft.Network/applicationGateways
+## Microsoft.Network/applicationgateways
|Category|Category Display Name|Costs To Export|
||||
If you think something is missing, you can open a GitHub comment at the bottom o
|ApplicationGatewayPerformanceLog|Application Gateway Performance Log|No|
-## Microsoft.Network/azurefirewalls
+## Microsoft.Network/azureFirewalls
|Category|Category Display Name|Costs To Export|
||||
+|AZFWApplicationRule|Azure Firewall Application Rule Hit|Yes|
+|AZFWApplicationRuleAggregation|Azure Firewall Network Rule Aggregation Hit|Yes|
+|AZFWDnsQuery|Azure Firewall Dns query Hit|Yes|
+|AZFWFqdnResolveFailure|Azure Firewall Fqdn Resolution Failure Hit|Yes|
+|AZFWIdpsSignature|Azure Firewall Idps Signature Hit|Yes|
+|AZFWNatRule|Azure Firewall Nat Rule Hit|Yes|
+|AZFWNatRuleAggregation|Azure Firewall Nat Rule Aggregation Hit|Yes|
+|AZFWNetworkRule|Azure Firewall Network Rule Hit|Yes|
+|AZFWNetworkRuleAggregation|Azure Firewall Application Rule Aggregation Hit|Yes|
+|AZFWThreatIntel|Azure Firewall ThreatIntel Hit|Yes|
|AzureFirewallApplicationRule|Azure Firewall Application Rule|No|
|AzureFirewallDnsProxy|Azure Firewall DNS Proxy|No|
|AzureFirewallNetworkRule|Azure Firewall Network Rule|No|
-## Microsoft.Network/bastionHosts
+## microsoft.network/bastionHosts
|Category|Category Display Name|Costs To Export|
||||
If you think something is missing, you can open a GitHub comment at the bottom o
|NetworkSecurityGroupRuleCounter|Network Security Group Rule Counter|No|
-## Microsoft.Network/p2sVpnGateways
+## Microsoft.Network/networkSecurityPerimeters
+
+|Category|Category Display Name|Costs To Export|
+||||
+|NSPInboundAccessAllowed|NSP Inbound Access Allowed.|Yes|
+|NSPInboundAccessDenied|NSP Inbound Access Denied.|Yes|
+|NSPOutboundAccessAllowed|NSP Outbound Access Allowed.|Yes|
+|NSPOutboundAccessDenied|NSP Outbound Access Denied.|Yes|
+|NSPOutboundAttempt|NSP Outbound Attempted.|Yes|
+|PrivateEndPointTraffic|Private Endpoint Traffic|Yes|
+|ResourceInboundAccessAllowed|Resource Inbound Access Allowed.|Yes|
+|ResourceInboundAccessDenied|Resource Inbound Access Denied|Yes|
+|ResourceOutboundAccessAllowed|Resource Outbound Access Allowed|Yes|
+|ResourceOutboundAccessDenied|Resource Outbound Access Denied|Yes|
++
+## microsoft.network/p2svpngateways
|Category|Category Display Name|Costs To Export|
||||
If you think something is missing, you can open a GitHub comment at the bottom o
|ProbeHealthStatusEvents|Traffic Manager Probe Health Results Event|No|
-## Microsoft.Network/virtualNetworkGateways
+## microsoft.network/virtualnetworkgateways
|Category|Category Display Name|Costs To Export|
||||
If you think something is missing, you can open a GitHub comment at the bottom o
|VMProtectionAlerts|VM protection alerts|No|
-## Microsoft.Network/vpnGateways
+## microsoft.network/vpngateways
|Category|Category Display Name|Costs To Export|
||||
If you think something is missing, you can open a GitHub comment at the bottom o
|OperationalLogs|Operational Logs|No|
+## Microsoft.OpenLogisticsPlatform/Workspaces
+
+|Category|Category Display Name|Costs To Export|
+||||
+|SupplyChainEntityOperations|Supply Chain Entity Operations|Yes|
+|SupplyChainEventLogs|Supply Chain Event logs|Yes|
++
## Microsoft.OperationalInsights/workspaces

|Category|Category Display Name|Costs To Export|
||||
-|Audit|Audit Logs|No|
+|Audit|Audit|No|
## Microsoft.PowerBI/tenants
If you think something is missing, you can open a GitHub comment at the bottom o
|Engine|Engine|No|
-## Microsoft.Purview/accounts
+## microsoft.purview/accounts
|Category|Category Display Name|Costs To Export|
||||
-|ScanStatusLogEvent|ScanStatus|Yes|
+|DataSensitivityLogEvent|DataSensitivity|Yes|
+|ScanStatusLogEvent|ScanStatus|No|
+|Security|PurviewAccountAuditEvents|Yes|
## Microsoft.RecoveryServices/Vaults
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
|HybridConnectionsEvent|HybridConnections Events|No|
+|HybridConnectionsLogs|HybridConnectionsLogs|No|
## Microsoft.Search/searchServices
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
-|Analytics|Analytics|Yes|
-|DataConnectors|Data Collection – Connectors|Yes|
+|DataConnectors|Data Collection - Connectors|Yes|
-## Microsoft.ServiceBus/namespaces
+## Microsoft.ServiceBus/Namespaces
|Category|Category Display Name|Costs To Export|
||||
+|ApplicationMetricsLogs|Application Metrics Logs|Yes|
|OperationalLogs|Operational Logs|No|
+|RuntimeAuditLogs|Runtime Audit Logs|Yes|
|VNetAndIPFilteringLogs|VNet/IP Filtering Connection Logs|No|
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
-|AllLogs|Azure Web PubSub Service Logs.|Yes|
+|ConnectivityLogs|Connectivity logs for Azure Web PubSub Service.|Yes|
+|HttpRequestLogs|Http Request logs for Azure Web PubSub Service.|Yes|
+|MessagingLogs|Messaging logs for Azure Web PubSub Service.|Yes|
## microsoft.singularity/accounts
If you think something is missing, you can open a GitHub comment at the bottom o
|StorageWrite|StorageWrite|Yes|
+## Microsoft.StorageCache/caches
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AscCacheOperationEvent|HPC Cache operation event|Yes|
+|AscUpgradeEvent|HPC Cache upgrade event|Yes|
+|AscWarningEvent|HPC Cache warning|Yes|
++
## Microsoft.StreamAnalytics/streamingjobs

|Category|Category Display Name|Costs To Export|
If you think something is missing, you can open a GitHub comment at the bottom o
|Management|Management|No|
-## microsoft.web/hostingenvironments
+## Microsoft.Web/hostingEnvironments
|Category|Category Display Name|Costs To Export|
||||
|AppServiceEnvironmentPlatformLogs|App Service Environment Platform Logs|No|
-## microsoft.web/sites
+## Microsoft.Web/sites
|Category|Category Display Name|Costs To Export|
||||
azure-monitor Analyze Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md
+
+ Title: Analyze usage in Log Analytics workspace in Azure Monitor
+description: Methods and queries to analyze the data in your Log Analytics workspace to help you understand usage and potential cause for high usage.
++ Last updated : 03/24/2022+
+
+# Analyze usage in Log Analytics workspace
+Azure Monitor costs can vary significantly based on the volume of data being collected in your Log Analytics workspace. This volume is affected by the set of solutions using the workspace and the amount of data collected by each. This article provides guidance on analyzing your collected data to help you control your data ingestion costs. It helps you determine the cause of higher than expected usage and predict your costs as you monitor additional resources and configure different Azure Monitor features.
+
+## Causes for higher than expected usage
+Each Log Analytics workspace is charged as a separate service and contributes to the bill for your Azure subscription. The amount of data ingestion can be considerable, depending on the following factors:
+
+ - Set of insights and services enabled and their configuration
+ - Number and type of monitored resources
+ - Volume of data collected from each monitored resource
+
+An unexpected increase in any of these factors can result in increased charges for data retention. The rest of this article provides methods for detecting such a situation and then analyzing collected data to identify and mitigate the source of the increased usage.
+
+## Usage analysis in Azure Monitor
+You should start your analysis with the existing tools in Azure Monitor. They require no configuration and can often provide the information you need with minimal effort. If you need deeper analysis into your collected data than the existing Azure Monitor features provide, use any of the following [log queries](log-query-overview.md) in [Log Analytics](log-analytics-overview.md).
+### Log Analytics Workspace Insights
+[Log Analytics Workspace Insights](log-analytics-workspace-insights-overview.md#usage-tab) provides you with a quick understanding of the data in your workspace including the following:
+
+- Data tables ingesting the most data volume in the main table
+- Top resources contributing data
+- Trend of data ingestion
+
+See the **Usage** tab for a breakdown of ingestion by solution and table. This can help you quickly identify the tables that contribute the bulk of your data volume. The tab also shows data collection trends over time, so you can determine whether data collection increased steadily or jumped suddenly in response to a particular configuration change.
+
+Select **Additional Queries** for pre-built queries that help you further understand your data patterns.
+
+### Usage and Estimated Costs
+The *Data ingestion per solution* chart on the [Usage and Estimated Costs](../usage-estimated-costs.md#usage-and-estimated-costs) page for each workspace shows the total volume of data sent and how much is being sent by each solution over the previous 31 days. This helps you determine trends such as whether any increase is from overall data usage or usage by a particular solution.
++
+## Log queries
+You can use [log queries](log-query-overview.md) in [Log Analytics](log-analytics-overview.md) if you need deeper analysis into your collected data. Each table in a Log Analytics workspace has the following standard columns that can assist you in analyzing billable data.
+
+- [_IsBillable](log-standard-columns.md#_isbillable) identifies records for which there is an ingestion charge. Use this column to filter out non-billable data.
+- [_BilledSize](log-standard-columns.md#_billedsize) provides the size in bytes of the record.
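+
+For example, the following query combines these two columns to report the billable volume ingested into a single table over the last day. This is a minimal sketch that uses the `Event` table as an example; substitute any table in your workspace.
+
+```kusto
+// Billable volume (GB) ingested into the Event table over the last day,
+// using the standard _IsBillable and _BilledSize columns described above.
+Event
+| where TimeGenerated > ago(1d)
+| where _IsBillable == true
+| summarize BillableDataGB = sum(_BilledSize) / 1.E9
+```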
+
+## Data volume by solution
+Analyze the amount of billable data collected by a particular service or solution. These queries use the [Usage](/azure/azure-monitor/reference/tables/usage) table that collects usage data for each table in the workspace.
++
+> [!NOTE]
+> The clause with `TimeGenerated` is only to ensure that the query experience in the Azure portal looks back beyond the default 24 hours. When using the **Usage** data type, `StartTime` and `EndTime` represent the time buckets for which results are presented.
+
+**Billable data volume by solution over the past month**
+
+```kusto
+Usage
+| where TimeGenerated > ago(32d)
+| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
+| where IsBillable == true
+| summarize BillableDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d), Solution
+| render columnchart
+```
+
+**Billable data volume by type over the past month**
+
+```kusto
+Usage
+| where TimeGenerated > ago(32d)
+| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
+| where IsBillable == true
+| summarize BillableDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d), DataType
+| render columnchart
+```
+
+**Billable data volume by solution and type over the past month**
+
+```kusto
+Usage
+| where TimeGenerated > ago(32d)
+| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
+| where IsBillable == true
+| summarize BillableDataGB = sum(Quantity) / 1000 by Solution, DataType
+| sort by Solution asc, DataType asc
+```
+
+**Billable data volume for specific events**
+If you find that a particular data type is collecting excessive data, you may want to analyze the data in that table to determine which records are increasing. This example filters particular event IDs in the `Event` table and then provides a count for each ID. You can modify this query to use columns from other tables.
+
+```kusto
+Event
+| where TimeGenerated > startofday(ago(31d)) and TimeGenerated < startofday(now())
+| where EventID == 5145 or EventID == 5156
+| where _IsBillable == true
+| summarize count(), Bytes=sum(_BilledSize) by EventID, bin(TimeGenerated, 1d)
+```
+
+## Data volume by computer
+Analyze the amount of billable data collected from a virtual machine or set of virtual machines. The **Usage** table doesn't include information about data collected from virtual machines, so these queries use the [find operator](/azure/data-explorer/kusto/query/findoperator) to search all tables that include a computer name. The **Usage** type is omitted because it's only used for analyzing data trends.
+
+> [!WARNING]
+> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the queries above.
+
+**Billable data volume by computer**
+
+```kusto
+find where TimeGenerated > ago(24h) project _BilledSize, _IsBillable, Computer, Type
+| where _IsBillable == true and Type != "Usage"
+| extend computerName = tolower(tostring(split(Computer, '.')[0]))
+| summarize BillableDataBytes = sum(_BilledSize) by computerName
+| sort by BillableDataBytes desc nulls last
+```
+
+**Count of billable events by computer**
+
+```kusto
+find where TimeGenerated > ago(24h) project _IsBillable, Computer
+| where _IsBillable == true and Type != "Usage"
+| extend computerName = tolower(tostring(split(Computer, '.')[0]))
+| summarize eventCount = count() by computerName
+| sort by eventCount desc nulls last
+```
+
+## Data volume by Azure resource, resource group, or subscription
+Analyze the amount of billable data collected from a particular resource or set of resources. These queries use the [_ResourceId](./log-standard-columns.md#_resourceid) and [_SubscriptionId](./log-standard-columns.md#_subscriptionid) columns for data from resources hosted in Azure.
+
+> [!WARNING]
+> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the queries above.
+
+**Billable data volume by resource ID**
+
+```kusto
+find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillable
+| where _IsBillable == true
+| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId
+| sort by BillableDataBytes nulls last
+```
+
+**Billable data volume by resource group**
+
+```kusto
+find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillable
+| where _IsBillable == true
+| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId
+| extend resourceGroup = tostring(split(_ResourceId, "/")[4] )
+| summarize BillableDataBytes = sum(BillableDataBytes) by resourceGroup
+| sort by BillableDataBytes nulls last
+```
+
+It may be helpful to parse the **_ResourceId** column:
+
+```Kusto
+| parse tolower(_ResourceId) with "/subscriptions/" subscriptionId "/resourcegroups/"
+ resourceGroup "/providers/" provider "/" resourceType "/" resourceName
+```
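+
+As a sketch of how that fragment fits into a complete query, the following combines it with the resource-scoped `find` pattern shown earlier in this section to break down billable volume by subscription, resource group, and resource name:
+
+```kusto
+find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillable
+| where _IsBillable == true
+| parse tolower(_ResourceId) with "/subscriptions/" subscriptionId "/resourcegroups/"
+    resourceGroup "/providers/" provider "/" resourceType "/" resourceName
+| summarize BillableDataBytes = sum(_BilledSize) by subscriptionId, resourceGroup, resourceName
+| sort by BillableDataBytes nulls last
+```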
+
+**Billable data volume by subscription**
+
+```kusto
+find where TimeGenerated > ago(24h) project _BilledSize, _IsBillable, _SubscriptionId
+| where _IsBillable == true
+| summarize BillableDataBytes = sum(_BilledSize) by _SubscriptionId
+| sort by BillableDataBytes nulls last
+```
+## Querying for common data types
+If you find that you have excessive billable data for a particular data type, then you may need to perform a query to analyze data in that table. The following queries provide samples for some common data types:
+
+**Security** solution
+
+```kusto
+SecurityEvent
+| summarize AggregatedValue = count() by EventID
+```
+
+**Log Management** solution
+
+```kusto
+Usage
+| where Solution == "LogManagement" and iff(isnotnull(toint(IsBillable)), IsBillable == true, IsBillable == "true") == true
+| summarize AggregatedValue = count() by DataType
+```
+
+**Perf** data type
+
+```kusto
+Perf
+| summarize AggregatedValue = count() by CounterPath
+```
+
+```kusto
+Perf
+| summarize AggregatedValue = count() by CounterName
+```
+
+**Event** data type
+
+```kusto
+Event
+| summarize AggregatedValue = count() by EventID
+```
+
+```kusto
+Event
+| summarize AggregatedValue = count() by EventLog, EventLevelName
+```
+
+**Syslog** data type
+
+```kusto
+Syslog
+| summarize AggregatedValue = count() by Facility, SeverityLevel
+```
+
+```kusto
+Syslog
+| summarize AggregatedValue = count() by ProcessName
+```
+
+**AzureDiagnostics** data type
+
+```kusto
+AzureDiagnostics
+| summarize AggregatedValue = count() by ResourceProvider, ResourceId
+```
+
+## Application insights data
+There are two approaches to investigating the amount of data collected for Application Insights, depending on whether you have a classic or workspace-based application. Use the `_BilledSize` property that is available on each ingested event for both workspace-based and classic resources. You can also use aggregated information in the [systemEvents](/azure/azure-monitor/reference/tables/appsystemevents) table for classic resources.
++
+> [!NOTE]
+> The queries in this section will work for both a workspace-based and classic Application Insights resource since [backwards compatibility](../app/convert-classic-resource.md#understanding-log-queries) allows you to continue to use [legacy table names](../app/apm-tables.md). For a workspace-based resource, open **Logs** from the **Log Analytics workspace** menu. For a classic resource, open **Logs** from the **Application Insights** menu.
++
+**Operations generating the most data volume in the last 30 days (workspace-based or classic)**
+
+```kusto
+dependencies
+| where timestamp >= startofday(ago(30d))
+| summarize sum(_BilledSize) by operation_Name
+| render barchart
+```
++
+**Data volume ingested in the last 24 hours (classic)**
+
+```kusto
+systemEvents
+| where timestamp >= ago(24h)
+| where type == "Billing"
+| extend BillingTelemetryType = tostring(dimensions["BillingTelemetryType"])
+| extend BillingTelemetrySizeInBytes = todouble(measurements["BillingTelemetrySize"])
+| summarize sum(BillingTelemetrySizeInBytes)
+```
+
+**Data volume by type ingested over the last 30 days (classic)**
+
+```kusto
+systemEvents
+| where timestamp >= startofday(ago(30d))
+| where type == "Billing"
+| extend BillingTelemetryType = tostring(dimensions["BillingTelemetryType"])
+| extend BillingTelemetrySizeInBytes = todouble(measurements["BillingTelemetrySize"])
+| summarize sum(BillingTelemetrySizeInBytes) by BillingTelemetryType, bin(timestamp, 1d)
+| render barchart
+```
+
+**Count of events by type ingested over the last 30 days (classic)**
+
+```kusto
+systemEvents
+| where timestamp >= startofday(ago(30d))
+| where type == "Billing"
+| extend BillingTelemetryType = tostring(dimensions["BillingTelemetryType"])
+| summarize count() by BillingTelemetryType, bin(timestamp, 1d)
+| render barchart
+```
++
+### Data volume trends for workspace-based resources
+To look at the data volume trends for [workspace-based Application Insights resources](../app/create-workspace-resource.md), use a query that includes all of the Application Insights tables. The following queries use the [table names specific to workspace-based resources](../app/apm-tables.md#table-schemas).
++
+**Data volume trends for all Application Insights resources in a workspace for the last week**
+
+```kusto
+union (AppAvailabilityResults),
+ (AppBrowserTimings),
+ (AppDependencies),
+ (AppExceptions),
+ (AppEvents),
+ (AppMetrics),
+ (AppPageViews),
+ (AppPerformanceCounters),
+ (AppRequests),
+ (AppSystemEvents),
+ (AppTraces)
+| where TimeGenerated >= startofday(ago(7d)) and TimeGenerated < startofday(now())
+| summarize sum(_BilledSize) by _ResourceId, bin(TimeGenerated, 1d)
+| render areachart
+```
+
+**Data volume trends for a specific Application Insights resource in a workspace for the last week**
+
+```kusto
+union (AppAvailabilityResults),
+ (AppBrowserTimings),
+ (AppDependencies),
+ (AppExceptions),
+ (AppEvents),
+ (AppMetrics),
+ (AppPageViews),
+ (AppPerformanceCounters),
+ (AppRequests),
+ (AppSystemEvents),
+ (AppTraces)
+| where TimeGenerated >= startofday(ago(7d)) and TimeGenerated < startofday(now())
+| where _ResourceId contains "<myAppInsightsResourceName>"
+| summarize sum(_BilledSize) by Type, bin(TimeGenerated, 1d)
+| render areachart
+```
+++
+## Understanding nodes sending data
+If you don't have excessive data from any particular source, you may have an excessive number of agents that are sending data.
+
+> [!WARNING]
+> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the queries above.
++
+**Count of agent nodes that are sending a heartbeat each day in the last month**
+
+```kusto
+Heartbeat
+| where TimeGenerated > startofday(ago(31d))
+| summarize nodes = dcount(Computer) by bin(TimeGenerated, 1d)
+| render timechart
+```
+
+**Count of nodes sending any data in the last 24 hours**
+
+```kusto
+find where TimeGenerated > ago(24h) project Computer
+| extend computerName = tolower(tostring(split(Computer, '.')[0]))
+| where computerName != ""
+| summarize nodes = dcount(computerName)
+```
+
+**Data volume sent by each node in the last 24 hours**
+
+```kusto
+find where TimeGenerated > ago(24h) project _BilledSize, Computer
+| extend computerName = tolower(tostring(split(Computer, '.')[0]))
+| where computerName != ""
+| summarize TotalVolumeBytes=sum(_BilledSize) by computerName
+```
+
+## Nodes billed by the legacy Per Node pricing tier
+The [legacy Per Node pricing tier](cost-logs.md#legacy-pricing-tiers) bills for nodes with hourly granularity and also doesn't count nodes that are only sending a set of security data types. To get a list of computers that will be billed as nodes if the workspace is in the legacy Per Node pricing tier, look for nodes that are sending billed data types since some data types are free. In this case, use the leftmost field of the fully qualified domain name.
+
+The following query returns the count of computers with billed data per hour. The number of units on your bill is in node months, represented by `billableNodeMonthsPerDay` in the query. If the workspace has the Update Management solution installed, add the **Update** and **UpdateSummary** data types to the list in the `where` clause.
+
+```kusto
+find where TimeGenerated >= startofday(ago(7d)) and TimeGenerated < startofday(now()) project Computer, _IsBillable, Type, TimeGenerated
+| where Type !in ("SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent")
+| extend computerName = tolower(tostring(split(Computer, '.')[0]))
+| where computerName != ""
+| where _IsBillable == true
+| summarize billableNodesPerHour=dcount(computerName) by bin(TimeGenerated, 1h)
+| summarize billableNodesPerDay = sum(billableNodesPerHour)/24., billableNodeMonthsPerDay = sum(billableNodesPerHour)/24./31. by day=bin(TimeGenerated, 1d)
+| sort by day asc
+```
+> [!NOTE]
+> There's some additional complexity in the actual billing algorithm when solution targeting is used that's not represented in the above query.
+
+## Security and Automation node counts
+
+**Count of distinct security nodes**
+
+```kusto
+union
+(
+ Heartbeat
+ | where (Solutions has 'security' or Solutions has 'antimalware' or Solutions has 'securitycenter')
+ | project Computer
+),
+(
+ ProtectionStatus
+ | where Computer !in (Heartbeat | project Computer)
+ | project Computer
+)
+| distinct Computer
+| project lowComputer = tolower(Computer)
+| distinct lowComputer
+| count
+```
+
+**Number of distinct Automation nodes**
+
+```kusto
+ ConfigurationData
+ | where (ConfigDataType == "WindowsServices" or ConfigDataType == "Software" or ConfigDataType =="Daemons")
+ | extend lowComputer = tolower(Computer) | summarize by lowComputer
+ | join (
+ Heartbeat
+ | where SCAgentChannel == "Direct"
+ | extend lowComputer = tolower(Computer) | summarize by lowComputer, ComputerEnvironment
+ ) on lowComputer
+ | summarize count() by ComputerEnvironment | sort by ComputerEnvironment asc
+```
+
+## Late-arriving data
+If you observe high data ingestion reported in `Usage` records, but you don't observe the same results when summing `_BilledSize` directly on the data type, it's possible that you have late-arriving data, which is data ingested with old timestamps.
+
+For example, an agent may have a connectivity issue and send accumulated data once it reconnects. Or a host may have an incorrect time. This can result in an apparent discrepancy between the ingested data reported by the [Usage](/azure/azure-monitor/reference/tables/usage) data type and a query summing [_BilledSize](./log-standard-columns.md#_billedsize) over the raw data for a particular day specified by **TimeGenerated**, the timestamp when the event was generated.
+
+To diagnose late-arriving data issues, use the [_TimeReceived](./log-standard-columns.md#_timereceived) column in addition to the [TimeGenerated](./log-standard-columns.md#timegenerated) column. `_TimeReceived` is the time when the record was received by the Azure Monitor ingestion point in the Azure cloud.
+
+The following example investigates the high ingested volume of [W3CIISLog](/azure/azure-monitor/reference/tables/w3ciislog) data received on May 2, 2021 by identifying the timestamps on that ingested data. The `where TimeGenerated > datetime(1970-01-01)` statement is included only to hint to the Log Analytics user interface to look across all data.
+
+```Kusto
+W3CIISLog
+| where TimeGenerated > datetime(1970-01-01)
+| where _TimeReceived >= datetime(2021-05-02) and _TimeReceived < datetime(2021-05-03)
+| where _IsBillable == true
+| summarize BillableDataMB = sum(_BilledSize)/1.E6 by bin(TimeGenerated, 1d)
+| sort by TimeGenerated asc
+```
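+
+To quantify the discrepancy described above, you can compare the volume that the **Usage** table reports for a data type against the sum of `_BilledSize` bucketed by **TimeGenerated**. The following is a hedged sketch using `W3CIISLog` as the example data type; a large difference for a given day can indicate late-arriving data.
+
+```kusto
+Usage
+| where TimeGenerated > ago(32d)
+| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
+| where DataType == "W3CIISLog" and IsBillable == true
+| summarize UsageReportedGB = sum(Quantity) / 1000. by day = bin(StartTime, 1d)
+| join kind=fullouter (
+    W3CIISLog
+    | where TimeGenerated >= startofday(ago(31d)) and TimeGenerated < startofday(now())
+    | where _IsBillable == true
+    | summarize BilledSizeGB = sum(_BilledSize) / 1.E9 by day = bin(TimeGenerated, 1d)
+) on day
+| project day = coalesce(day, day1), UsageReportedGB, BilledSizeGB
+| sort by day asc
+```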
+
+## Next steps
+
+- See [Azure Monitor Logs pricing details](cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
+- See [Azure Monitor cost and usage](../usage-estimated-costs.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill.
+- See [Azure Monitor best practices - Cost management](../best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges.
+- See [Ingestion-time transformations in Azure Monitor Logs (preview)](ingestion-time-transformations.md) for details on using ingestion-time transformations to reduce the amount of data you collect in a Log Analytics workspace by filtering unwanted records and columns.
azure-monitor Change Pricing Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/change-pricing-tier.md
+
+ Title: Change pricing tier for Log Analytics workspace
+description: Details on how to change pricing tier for Log Analytics workspace in Azure Monitor.
+ Last updated : 03/25/2022+
+
+# Change pricing tier for Log Analytics workspace
+Each Log Analytics workspace in Azure Monitor can have a different [pricing tier](cost-logs.md#commitment-tiers). This article describes how to change the pricing tier for a workspace and how to track these changes.
+
+> [!NOTE]
+> This article describes how to change the commitment tier for a Log Analytics workspace once you determine which commitment tier you want to use. See [Azure Monitor Logs pricing details](cost-logs.md) for details on how commitment tiers work and [Azure Monitor cost and usage](../usage-estimated-costs.md#log-analytics-workspace) for recommendations on the most cost effective commitment based on your observed Azure Monitor usage.
+## Azure portal
+Use the following steps to change the pricing tier of your workspace using the Azure portal.
+
+1. From the **Log Analytics workspaces** menu, select your workspace, and open **Usage and estimated costs**. This displays a list of each of the pricing tiers available for this workspace.
+
+2. Review the estimated costs for each pricing tier. This estimate assumes that the last 31 days of your usage is typical. In the example below, based on the data patterns from the previous 31 days, this workspace would cost less in the Pay-As-You-Go tier (#1) compared to the 100 GB/day commitment tier (#2).
+
+
+3. Click **Select** if you decide to change the pricing tier after reviewing the estimated costs.
+
+## Azure Resource Manager
+To set the pricing tier using an [Azure Resource Manager template](./resource-manager-workspace.md), use the `sku` object to set the pricing tier and the `capacityReservationLevel` parameter if the pricing tier is `capacityreservation`. For details on this template format, see [Microsoft.OperationalInsights workspaces](/azure/templates/microsoft.operationalinsights/workspaces).
+
+The following sample template sets a workspace to a 300 GB/day commitment tier. To set the pricing tier to other values such as Pay-As-You-Go (called `pergb2018` for the SKU), omit the `capacityReservationLevel` property.
+
+```json
+{
+  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "name": "YourWorkspaceName",
+ "type": "Microsoft.OperationalInsights/workspaces",
+ "apiVersion": "2020-08-01",
+ "location": "yourWorkspaceRegion",
+ "properties": {
+ "sku": {
+ "name": "capacityreservation",
+ "capacityReservationLevel": 300
+ }
+ }
+ }
+ ]
+}
+```
+
+See [Deploying the sample templates](../resource-manager-samples.md) if you're not familiar with using Resource Manager templates.
+++
+## Tracking pricing tier changes
+Changes to a workspace's pricing tier are recorded in the [Activity Log](../essentials/activity-log.md). Filter for events with an **Operation** of *Create Workspace*. The event's **Change history** tab shows the old and new pricing tiers in the `properties.sku.name` row. To monitor changes to the pricing tier, [create an alert](../alerts/alerts-activity-log.md) for the *Create Workspace* operation.
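+
+If you collect the Activity Log into a Log Analytics workspace, a hedged sketch for listing recent workspace write operations (which include pricing tier changes) from the `AzureActivity` table might look like the following. The operation name value used here is an assumption and may vary by Activity Log schema version.
+
+```kusto
+// List recent write operations against Log Analytics workspaces.
+// Pricing tier changes appear as workspace write operations.
+AzureActivity
+| where TimeGenerated > ago(30d)
+| where OperationNameValue =~ "Microsoft.OperationalInsights/workspaces/write"
+| project TimeGenerated, _ResourceId, Caller, ActivityStatusValue
+| sort by TimeGenerated desc
+```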
+
+## Next steps
+
+- See [Azure Monitor Logs pricing details](cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
+- See [Azure Monitor cost and usage](../usage-estimated-costs.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill.
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
+
+ Title: Azure Monitor Logs pricing details
+description: Cost details for data stored in a Log Analytics workspace in Azure Monitor, including commitment tiers and data size calculation.
++ Last updated : 03/24/2022+
+
+# Azure Monitor Logs pricing details
+The most significant charges for most Azure Monitor implementations will typically be ingestion and retention of data in your Log Analytics workspaces. Several features in Azure Monitor do not have a direct cost but add to the workspace data that's collected. This article describes how data charges are calculated for your Log Analytics workspaces and Application Insights resources and the different configuration options that affect your costs.
+
+## Pricing model
+The default pricing for Log Analytics is a Pay-As-You-Go model that's based on ingested data volume and data retention. Each Log Analytics workspace is charged as a separate service and contributes to the bill for your Azure subscription. The amount of data ingestion can be considerable, depending on the following factors:
+
+- The set of management solutions enabled and their configuration
+- The number and type of monitored resources
+- Type of data collected from each monitored resource
+
+## Data size calculation
+Data volume is measured as the size of the data that will be stored in GB (10^9 bytes). The data size of a single record is calculated from a string representation of the columns that are stored in the Log Analytics workspace for that record, regardless of whether the data is sent from an agent or added during the ingestion process. This includes any custom columns added by the [custom logs API](custom-logs-overview.md), [ingestion-time transformations](ingestion-time-transformations.md), or [custom fields](custom-fields.md) that are added as data is collected and then stored in the workspace.
+
+### Excluded columns
+The following [standard columns](log-standard-columns.md) that are common to all tables, are excluded in the calculation of the record size. All other columns stored in Log Analytics are included in the calculation of the record size.
+
+- `_ResourceId`
+- `_SubscriptionId`
+- `_ItemId`
+- `_IsBillable`
+- `_BilledSize`
+- `Type`
++
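+As a rough way to see how this plays out in your own workspace, the following sketch reports the average billed size per record for each table; the excluded columns are already accounted for in `_BilledSize`. This scans all tables, which is resource intensive, so keep the time range short.
+
+```kusto
+// Average billed size per record and record count for each table over the last hour.
+union withsource=TableName *
+| where TimeGenerated > ago(1h)
+| where _IsBillable == true
+| summarize AvgRecordSizeBytes = avg(_BilledSize), Records = count() by TableName
+| sort by AvgRecordSizeBytes desc
+```
+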
+### Excluded tables
+Some tables are free from data ingestion charges altogether, including [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity), [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat), [Usage](/azure/azure-monitor/reference/tables/usage), and [Operation](/azure/azure-monitor/reference/tables/operation). This is always indicated by the [_IsBillable](log-standard-columns.md#_isbillable) column, which shows whether a record was excluded from billing for data ingestion.
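+
+A minimal sketch that uses the **Usage** table to show which data types were billable versus free over the last day:
+
+```kusto
+// Ingested volume (GB) per data type, split by whether the data was billable.
+Usage
+| where TimeGenerated > ago(1d)
+| summarize IngestedGB = sum(Quantity) / 1000. by DataType, IsBillable
+| sort by IngestedGB desc
+```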
+++
+### Charges for other solutions and services
+Some solutions have more specific policies about free data ingestion. For example [Azure Migrate](https://azure.microsoft.com/pricing/details/azure-migrate/) makes dependency visualization data free for the first 180-days of a Server Assessment. Services such as [Microsoft Defender for Cloud](https://azure.microsoft.com/pricing/details/azure-defender/), [Microsoft Sentinel](https://azure.microsoft.com/pricing/details/azure-sentinel/), and [Configuration management](https://azure.microsoft.com/pricing/details/automation/) have their own pricing models.
+
+See the documentation for different services and solutions for any unique billing calculations.
+
+## Commitment Tiers
+In addition to the Pay-As-You-Go model, Log Analytics has **Commitment Tiers**, which can save you as much as 30 percent compared to the Pay-As-You-Go price. With commitment tier pricing, you can commit to buy data ingestion starting at 100 GB/day at a lower price than Pay-As-You-Go pricing. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected.
+
+- During the commitment period, you can change to a higher commitment tier (which restarts the 31-day commitment period), but you can't move back to Pay-As-You-Go or to a lower commitment tier until after you finish the commitment period.
+- At the end of the commitment period, the workspace retains the selected commitment tier, and the workspace can be moved to Pay-As-You-Go or to a different commitment tier at any time.
+
+Billing for the commitment tiers is done on a daily basis. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for a detailed listing of the commitment tiers and their prices.
+
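+If you want to check your recent daily billable ingestion against these commitment levels directly from your data, a minimal sketch using the **Usage** table:
+
+```kusto
+// Total billable ingestion (GB) per day over the last 31 days,
+// to compare against commitment tier levels such as 100 GB/day or 200 GB/day.
+Usage
+| where TimeGenerated > ago(32d)
+| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
+| where IsBillable == true
+| summarize IngestedGBPerDay = sum(Quantity) / 1000. by bin(StartTime, 1d)
+| sort by StartTime asc
+```
+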
+> [!TIP]
+> The **Usage and estimated costs** menu item for each Log Analytics workspace shows an estimate of your monthly charges at each commitment level. You should periodically review this information to determine whether you can reduce your charges by moving to another tier. See [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs) for information on this view.
++
+> [!NOTE]
+> Starting June 2, 2021, **Capacity Reservations** were renamed to **Commitment Tiers**. Data collected above your commitment tier level (overage) is now billed at the same price-per-GB as the current commitment tier level, lowering costs compared to the old method of billing at the Pay-As-You-Go rate, and reducing the need for users with large data volumes to fine-tune their commitment level. Three new commitment tiers were also added: 1000, 2000, and 5000 GB/day.
+
+## Dedicated clusters
+An [Azure Monitor Logs dedicated cluster](logs-dedicated-clusters.md) is a collection of workspaces in a single managed Azure Data Explorer cluster. Dedicated clusters support advanced features such as [customer-managed keys](customer-managed-keys.md) and use the same commitment tier pricing model as workspaces although they must have a commitment level of at least 500 GB/day. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. There is no Pay-As-You-Go option for clusters.
+
+The cluster commitment tier has a 31-day commitment period after the commitment level is increased. During the commitment period, the commitment tier level can't be reduced, but it can be increased at any time. When workspaces are associated to a cluster, the data ingestion billing for those workspaces is done at the cluster level using the configured commitment tier level.
+
+There are two modes of billing for a cluster that you specify when you create the cluster.
+
+- **Cluster (default)**: Billing for ingested data is done at the cluster level. The ingested data quantities from each workspace associated to a cluster are aggregated to calculate the daily bill for the cluster. Per-node allocations from [Microsoft Defender for Cloud](../../security-center/index.yml) are applied at the workspace level prior to this aggregation across all workspaces in the cluster.
+
+- **Workspaces**: Commitment tier costs for your cluster are attributed proportionately to the workspaces in the cluster, by each workspace's data ingestion volume (after accounting for per-node allocations from [Microsoft Defender for Cloud](../../security-center/index.yml) for each workspace).<br><br>If the total data volume ingested into a cluster for a day is less than the commitment tier, each workspace is billed for its ingested data at the effective per-GB commitment tier rate by billing them a fraction of the commitment tier. The unused part of the commitment tier is then billed to the cluster resource.<br><br>If the total data volume ingested into a cluster for a day is more than the commitment tier, each workspace is billed for a fraction of the commitment tier based on its fraction of the ingested data that day, and each workspace is also billed for its fraction of the ingested data above the commitment tier. In that case, nothing is billed to the cluster resource.
+
+In cluster billing options, data retention is billed for each workspace. Cluster billing starts when the cluster is created, regardless of whether workspaces are associated with the cluster.
+
+When you link workspaces to a cluster, the pricing tier is changed to cluster, and ingestion is billed based on the cluster's commitment tier. Workspaces associated to a cluster no longer have their own pricing tier. Workspaces can be unlinked from a cluster at any time, and their pricing tier then changes to per-GB.
+
+If your linked workspace is using legacy Per Node pricing tier, it will be billed based on data ingested against the cluster's Commitment Tier, and no longer Per Node. Per-node data allocations from Microsoft Defender for Cloud will continue to be applied.
+
+See [Create a dedicated cluster](logs-dedicated-clusters.md#create-a-dedicated-cluster) for details on creating a dedicated cluster and specifying its billing type.
+
+## Basic Logs
+You can configure certain tables in a Log Analytics workspace to use [Basic Logs](basic-logs-configure.md). Data in these tables has a significantly reduced ingestion charge and a limited retention period. There is a charge though to query against these tables. Basic Logs are intended for high-volume verbose logs you use for debugging, troubleshooting and auditing, but not for analytics and alerts.
+
+See [Configure Basic Logs in Azure Monitor](basic-logs-configure.md) for details on Basic Logs including how to configure them and query their data.
+## Data retention and Archive Logs
+In addition to data ingestion, there is a charge for the retention of data in each Log Analytics workspace. You can set the retention period for the entire workspace or for each table. After this period, the data is either removed or archived. Archived Logs have a reduced retention charge, but there is a charge to restore or search against them. Use Archive Logs to reduce your costs for data that you must store for compliance or occasional investigation.
+
+See [Configure data retention and archive policies in Azure Monitor Logs](data-retention-archive.md) for details on data retention and archiving including how to configure these settings and access archived data.
+## Application insights billing
+Since [workspace-based Application Insights resources](../app/create-workspace-resource.md) store their data in a Log Analytics workspace, the billing for data ingestion and retention is done by the workspace where the Application Insights data is located. This enables you to leverage all options of the Log Analytics pricing model, including [commitment tiers](#commitment-tiers) in addition to Pay-As-You-Go.
+
+Data ingestion and data retention for a [classic Application Insights resource](../app/create-new-resource.md) follow the same Pay-As-You-Go pricing as workspace-based resources, but they can't leverage commitment tiers.
+
+Telemetry from ping tests and multi-step tests is charged the same as data usage for other telemetry from your app. Use of web tests and enabling alerting on custom metric dimensions is still reported through Application Insights. There's no data volume charge for using the [Live Metrics Stream](../app/live-stream.md).
+
+See [Application Insights legacy enterprise (per node) pricing tier](../app/legacy-pricing.md) for details about legacy tiers that are available to early adopters of Application Insights.
+
+## Workspaces with Microsoft Sentinel
+When Microsoft Sentinel is enabled in a Log Analytics workspace, all data collected in that workspace is subject to Sentinel charges in addition to Log Analytics charges. For this reason, you will often separate your security and operational data in different workspaces so that you don't incur [Sentinel charges](../../sentinel/billing.md) for operational data. There may be particular situations though where combining this data can actually result in a cost savings. This is typically when you aren't collecting enough security and operational data to each reach a commitment tier on their own, but the combined data is enough to reach a commitment tier. See **Combining your SOC and non-SOC data** in [Design your Microsoft Sentinel workspace architecture](../../sentinel/design-your-workspace-architecture.md#decision-tree) for details and a sample cost calculation.
+## Workspaces with Microsoft Defender for Cloud
+[Microsoft Defender for Servers (part of Defender for Cloud)](../../security-center/index.yml) [bills by the number of monitored servers](https://azure.microsoft.com/pricing/details/azure-defender/) and provides a 500 MB/server/day data allocation that is applied to the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security):
+
+- [WindowsEvent](/azure/azure-monitor/reference/tables/windowsevent)
+- [SecurityAlert](/azure/azure-monitor/reference/tables/securityalert)
+- [SecurityBaseline](/azure/azure-monitor/reference/tables/securitybaseline)
+- [SecurityBaselineSummary](/azure/azure-monitor/reference/tables/securitybaselinesummary)
+- [SecurityDetection](/azure/azure-monitor/reference/tables/securitydetection)
+- [SecurityEvent](/azure/azure-monitor/reference/tables/securityevent)
+- [WindowsFirewall](/azure/azure-monitor/reference/tables/windowsfirewall)
+- [MaliciousIPCommunication](/azure/azure-monitor/reference/tables/maliciousipcommunication)
+- [LinuxAuditLog](/azure/azure-monitor/reference/tables/linuxauditlog)
+- [SysmonEvent](/azure/azure-monitor/reference/tables/sysmonevent)
+- [ProtectionStatus](/azure/azure-monitor/reference/tables/protectionstatus)
+- [Update](/azure/azure-monitor/reference/tables/update) and [UpdateSummary](/azure/azure-monitor/reference/tables/updatesummary) when the Update Management solution isn't running in the workspace or solution targeting is enabled. See [What data types are included in the 500-MB data daily allowance?](../../defender-for-cloud/enhanced-security-features-overview.md#what-data-types-are-included-in-the-500-mb-data-daily-allowance)
+
+The count of monitored servers is calculated on an hourly granularity. The daily data allocation contributions from each monitored server are aggregated at the workspace level. If the workspace is in the legacy Per Node pricing tier, the Microsoft Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data.
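+
+To see how much of your ingestion falls into these security data types, and so counts against the 500 MB/server/day allocation, the following is a hedged sketch using the **Usage** table; the data type list mirrors the list above.
+
+```kusto
+// Daily ingestion (GB) of the Defender for Cloud security data types listed above.
+let SecurityDataTypes = dynamic(["WindowsEvent", "SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "Update", "UpdateSummary"]);
+Usage
+| where TimeGenerated > ago(32d)
+| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
+| where IsBillable == true and DataType in (SecurityDataTypes)
+| summarize SecurityDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d)
+| sort by StartTime asc
+```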
+
+## Legacy pricing tiers
+Subscriptions that contained a Log Analytics workspace or Application Insights resource on April 2, 2018, or are linked to an Enterprise Agreement that started before February 1, 2019 and is still active, will continue to have access to the following legacy pricing tiers:
+
+- Free Trial
+- Standalone (Per GB)
+- Per Node (OMS)
+
+### Free Trial pricing tier
+Workspaces in the **Free Trial** pricing tier will have daily data ingestion limited to 500 MB (except for security data types collected by [Microsoft Defender for Cloud](../../security-center/index.yml)), and the data retention is limited to seven days. The Free Trial pricing tier is intended only for evaluation purposes. No SLA is provided for the Free tier. Workspaces in the Standalone or Per Node pricing tiers have user-configurable retention from 30 to 730 days. Creating new workspaces in (or moving existing workspaces into) the Free Trial pricing tier is possible until July 1, 2022.
+
+### Standalone pricing tier
+Usage on the **Standalone** pricing tier is billed by the ingested data volume. It is reported in the **Log Analytics** service and the meter is named "Data Analyzed".
+
+### Per Node pricing tier
+The **Per Node** pricing tier charges per monitored VM (node) at an hourly granularity. For each monitored node, the workspace is allocated 500 MB of data per day that's not billed. This allocation is calculated with hourly granularity and is aggregated at the workspace level each day. Data ingested above the aggregate daily data allocation is billed per GB as data overage. On your bill, the service will be **Insight and Analytics** for Log Analytics usage if the workspace is in the Per Node pricing tier. Usage is reported on three meters:
+
+- **Node**: this is usage for the number of monitored nodes (VMs) in units of node months.
+- **Data Overage per Node**: this is the number of GB of data ingested in excess of the aggregated data allocation.
+- **Data Included per Node**: this is the amount of ingested data that was covered by the aggregated data allocation. This meter is also used in all pricing tiers to show the amount of data covered by the Microsoft Defender for Cloud data allocation.
+
+> [!TIP]
+> If your workspace has access to the **Per Node** pricing tier but you're wondering whether it would cost less in a Pay-As-You-Go tier, you can [use the query below](#evaluate-the-legacy-per-node-pricing-tier) for a recommendation.
+
+Workspaces created before April 2016 can continue to use the **Standard** and **Premium** pricing tiers that have fixed data retention of 30 days and 365 days, respectively. New workspaces can't be created in the **Standard** or **Premium** pricing tiers, and if a workspace is moved out of these tiers, it can't be moved back. Data ingestion meters on your Azure bill for these legacy tiers are called "Data analyzed."
++
+### Microsoft Defender for Cloud with legacy pricing tiers
+Following are considerations between legacy Log Analytics tiers and how usage is billed for [Microsoft Defender for Cloud](../../security-center/index.yml).
+
+- If the workspace is in the legacy Standard or Premium tier, Microsoft Defender for Cloud is billed only for Log Analytics data ingestion, not per node.
+- If the workspace is in the legacy Per Node tier, Microsoft Defender for Cloud is billed using the current [Microsoft Defender for Cloud node-based pricing model](https://azure.microsoft.com/pricing/details/security-center/).
+- In other pricing tiers (including commitment tiers), if Microsoft Defender for Cloud was enabled before June 19, 2017, Microsoft Defender for Cloud is billed only for Log Analytics data ingestion. Otherwise, Microsoft Defender for Cloud is billed using the current Microsoft Defender for Cloud node-based pricing model.
+
+More details of pricing tier limitations are available at [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md#log-analytics-workspaces).
+
+None of the legacy pricing tiers have regional-based pricing.
+
+> [!NOTE]
+> To use the entitlements that come from purchasing OMS E1 Suite, OMS E2 Suite, or OMS Add-On for System Center, choose the Log Analytics *Per Node* pricing tier.
+
+## Evaluate the legacy Per Node pricing tier
+It's often difficult to determine whether workspaces with access to the legacy **Per Node** pricing tier are better off in that tier or in a current **Pay-As-You-Go** or **Commitment Tier**. This involves understanding the trade-off between the fixed cost per monitored node in the Per Node pricing tier and its included data allocation of 500 MB/node/day and the cost of just paying for ingested data in the Pay-As-You-Go (Per GB) tier.
+
+The following query can be used to make a recommendation for the optimal pricing tier based on a workspace's usage patterns. This query looks at the monitored nodes and data ingested into a workspace in the last seven days, and for each day, it evaluates which pricing tier would have been optimal. To use the query, you need to specify:
+
+- Whether the workspace is using Microsoft Defender for Cloud by setting **workspaceHasSecurityCenter** to **true** or **false**.
+- Update the prices if you have specific discounts.
+- Specify the number of days to look back and analyze by setting **daysToEvaluate**. This is useful if the query is taking too long trying to look at seven days of data.
+
+```kusto
+// Set these parameters before running query
+// For Pay-As-You-Go (per-GB) and commitment tier pricing details, see https://azure.microsoft.com/pricing/details/monitor/.
+// You can see your per-node costs in your Azure usage and charge data. For more information, see https://docs.microsoft.com/en-us/azure/cost-management-billing/understand/download-azure-daily-usage.
+let workspaceHasSecurityCenter = false;  // Set to true if the workspace uses Microsoft Defender for Cloud; referenced by the query below
+let daysToEvaluate = 7; // Number of previous days to analyze; referenced by the query below
+let PerNodePrice = 15.; // Monthly price per monitored node
+let PerNodeOveragePrice = 2.30; // Price per GB for data overage in the Per Node pricing tier
+let PerGBPrice = 2.30; // Enter the Pay-as-you-go price for your workspace's region (from https://azure.microsoft.com/pricing/details/monitor/)
+let CommitmentTier100Price = 196.; // Enter your price for the 100 GB/day commitment tier
+let CommitmentTier200Price = 368.; // Enter your price for the 200 GB/day commitment tier
+let CommitmentTier300Price = 540.; // Enter your price for the 300 GB/day commitment tier
+let CommitmentTier400Price = 704.; // Enter your price for the 400 GB/day commitment tier
+let CommitmentTier500Price = 865.; // Enter your price for the 500 GB/day commitment tier
+let CommitmentTier1000Price = 1700.; // Enter your price for the 1000 GB/day commitment tier
+let CommitmentTier2000Price = 3320.; // Enter your price for the 2000 GB/day commitment tier
+let CommitmentTier5000Price = 8050.; // Enter your price for the 5000 GB/day commitment tier
+//
+let SecurityDataTypes=dynamic(["SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent", "Update", "UpdateSummary"]);
+let StartDate = startofday(datetime_add("Day",-1*daysToEvaluate,now()));
+let EndDate = startofday(now());
+union *
+| where TimeGenerated >= StartDate and TimeGenerated < EndDate
+| extend computerName = tolower(tostring(split(Computer, '.')[0]))
+| where computerName != ""
+| summarize nodesPerHour = dcount(computerName) by bin(TimeGenerated, 1h)
+| summarize nodesPerDay = sum(nodesPerHour)/24. by day=bin(TimeGenerated, 1d)
+| join kind=leftouter (
+ Heartbeat
+ | where TimeGenerated >= StartDate and TimeGenerated < EndDate
+ | where Computer != ""
+ | summarize ASCnodesPerHour = dcount(Computer) by bin(TimeGenerated, 1h)
+ | extend ASCnodesPerHour = iff(workspaceHasSecurityCenter, ASCnodesPerHour, 0)
+ | summarize ASCnodesPerDay = sum(ASCnodesPerHour)/24. by day=bin(TimeGenerated, 1d)
+) on day
+| join (
+ Usage
+ | where TimeGenerated >= StartDate and TimeGenerated < EndDate
+ | where IsBillable == true
+ | extend NonSecurityData = iff(DataType !in (SecurityDataTypes), Quantity, 0.)
+ | extend SecurityData = iff(DataType in (SecurityDataTypes), Quantity, 0.)
+ | summarize DataGB=sum(Quantity)/1000., NonSecurityDataGB=sum(NonSecurityData)/1000., SecurityDataGB=sum(SecurityData)/1000. by day=bin(StartTime, 1d)
+) on day
+| extend AvgGbPerNode = NonSecurityDataGB / nodesPerDay
+| extend OverageGB = iff(workspaceHasSecurityCenter,
+ max_of(DataGB - 0.5*nodesPerDay - 0.5*ASCnodesPerDay, 0.),
+ max_of(DataGB - 0.5*nodesPerDay, 0.))
+| extend PerNodeDailyCost = nodesPerDay * PerNodePrice / 31. + OverageGB * PerNodeOveragePrice
+| extend billableGB = iff(workspaceHasSecurityCenter,
+ (NonSecurityDataGB + max_of(SecurityDataGB - 0.5*ASCnodesPerDay, 0.)), DataGB )
+| extend PerGBDailyCost = billableGB * PerGBPrice
+| extend CommitmentTier100DailyCost = CommitmentTier100Price + max_of(billableGB - 100, 0.)* CommitmentTier100Price/100.
+| extend CommitmentTier200DailyCost = CommitmentTier200Price + max_of(billableGB - 200, 0.)* CommitmentTier200Price/200.
+| extend CommitmentTier300DailyCost = CommitmentTier300Price + max_of(billableGB - 300, 0.)* CommitmentTier300Price/300.
+| extend CommitmentTier400DailyCost = CommitmentTier400Price + max_of(billableGB - 400, 0.)* CommitmentTier400Price/400.
+| extend CommitmentTier500DailyCost = CommitmentTier500Price + max_of(billableGB - 500, 0.)* CommitmentTier500Price/500.
+| extend CommitmentTier1000DailyCost = CommitmentTier1000Price + max_of(billableGB - 1000, 0.)* CommitmentTier1000Price/1000.
+| extend CommitmentTier2000DailyCost = CommitmentTier2000Price + max_of(billableGB - 2000, 0.)* CommitmentTier2000Price/2000.
+| extend CommitmentTier5000DailyCost = CommitmentTier5000Price + max_of(billableGB - 5000, 0.)* CommitmentTier5000Price/5000.
+| extend MinCost = min_of(
+ PerNodeDailyCost,PerGBDailyCost,CommitmentTier100DailyCost,CommitmentTier200DailyCost,
+ CommitmentTier300DailyCost, CommitmentTier400DailyCost, CommitmentTier500DailyCost, CommitmentTier1000DailyCost, CommitmentTier2000DailyCost, CommitmentTier5000DailyCost)
+| extend Recommendation = case(
+ MinCost == PerNodeDailyCost, "Per node tier",
+ MinCost == PerGBDailyCost, "Pay-as-you-go tier",
+ MinCost == CommitmentTier100DailyCost, "Commitment tier (100 GB/day)",
+ MinCost == CommitmentTier200DailyCost, "Commitment tier (200 GB/day)",
+ MinCost == CommitmentTier300DailyCost, "Commitment tier (300 GB/day)",
+ MinCost == CommitmentTier400DailyCost, "Commitment tier (400 GB/day)",
+ MinCost == CommitmentTier500DailyCost, "Commitment tier (500 GB/day)",
+ MinCost == CommitmentTier1000DailyCost, "Commitment tier (1000 GB/day)",
+ MinCost == CommitmentTier2000DailyCost, "Commitment tier (2000 GB/day)",
+ MinCost == CommitmentTier5000DailyCost, "Commitment tier (5000 GB/day)",
+ "Error"
+)
+| project day, nodesPerDay, ASCnodesPerDay, NonSecurityDataGB, SecurityDataGB, OverageGB, AvgGbPerNode, PerGBDailyCost, PerNodeDailyCost,
+ CommitmentTier100DailyCost, CommitmentTier200DailyCost, CommitmentTier300DailyCost, CommitmentTier400DailyCost, CommitmentTier500DailyCost, CommitmentTier1000DailyCost, CommitmentTier2000DailyCost, CommitmentTier5000DailyCost, Recommendation
+| sort by day asc
+//| project day, Recommendation // Uncomment this line to show only the recommendation for each day
+```
+
+This query isn't an exact replication of how usage is calculated, but it provides pricing tier recommendations in most cases.
+
+> [!NOTE]
+> To use the entitlements that come from purchasing OMS E1 Suite, OMS E2 Suite, or OMS Add-On for System Center, choose the Log Analytics *Per Node* pricing tier.
++
+## Next steps
+- See [Azure Monitor cost and usage](../usage-estimated-costs.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill.
+- See [Analyze usage in Log Analytics workspace](analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher than expected usage and opportunities to reduce the amount of data collected.
+- See [Set daily cap on Log Analytics workspace](daily-cap.md) to control your costs by configuring a maximum volume that may be ingested in a workspace each day.
+- See [Azure Monitor best practices - Cost management](../best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges.
azure-monitor Custom Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-fields.md
Last updated 10/20/2021
> This article describes how to parse text data in a Log Analytics workspace as it's collected. We recommend parsing text data in a query filter after it's collected following the guidance described in [Parse text data in Azure Monitor](./parse-text.md). It provides several advantages over using custom fields. > [!IMPORTANT]
-> Custom fields increases the amount of data collected in the Log Analytics workspace which can increase your cost. See [Manage usage and costs with Azure Monitor Logs](./manage-cost-storage.md#pricing-model) for details.
+> Custom fields increase the amount of data collected in the Log Analytics workspace, which can increase your cost. See [Azure Monitor Logs pricing details](cost-logs.md) for details.
The **Custom Fields** feature of Azure Monitor allows you to extend existing records in your Log Analytics workspace by adding your own searchable fields. Custom fields are automatically populated from data extracted from other properties in the same record.
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md
Customer-Managed key is provided on dedicated cluster and these operations are r
- 409 ΓÇö Workspace link or unlink operation in process. ## Next steps -- Learn about [Log Analytics dedicated cluster billing](./manage-cost-storage.md#log-analytics-dedicated-clusters)
+- Learn about [Log Analytics dedicated cluster billing](cost-logs.md#dedicated-clusters)
- Learn about [proper design of Log Analytics workspaces](./design-logs-deployment.md)
azure-monitor Daily Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md
+
+ Title: Set daily cap on Log Analytics workspace
+description: Set a
++ Last updated : 03/28/2022+
+
+# Set daily cap on Log Analytics workspace
+A daily cap on a Log Analytics workspace allows you to avoid unexpected increases in charges for data ingestion by stopping collection of billable data for the rest of the day whenever a specified threshold is reached. This article describes how the daily cap works and how to configure one in your workspace.
+
+> [!IMPORTANT]
+> You should use care when setting a daily cap. When data collection stops, your ability to observe your resources and to receive alerts when their health conditions change will be impacted. The cap can also affect other Azure services and solutions whose functionality may depend on up-to-date data being available in the workspace. Your goal shouldn't be to regularly hit the daily limit, but rather to use it as an infrequent safeguard against unplanned charges resulting from an unexpected increase in the volume of data collected.
++
+## How the daily cap works
+Each workspace has a daily cap that defines its own data volume limit. When the daily cap is reached, a warning banner appears across the top of the page for the selected Log Analytics workspace in the Azure portal, and an operation event is sent to the *Operation* table under the **LogManagement** category. You can optionally create an alert rule to send an alert when this event is created.
+
+Data collection resumes at the reset time, which is a different hour of the day for each workspace. This reset hour can't be configured. When collection resumes, another operation event is sent to the *Operation* table, and you can optionally create an alert rule for that event as well.
+
+> [!NOTE]
+> The daily cap can't stop data collection at precisely the specified cap level and some excess data is expected, particularly if the workspace is receiving high volumes of data. If data is collected above the cap, it's still billed. See [View the effect of the Daily Cap](#view-the-effect-of-the-daily-cap) for a query that is helpful in studying the daily cap behavior.
+
+## Application Insights
+You shouldn't create a daily cap for workspace-based Application Insights resources; instead, create a daily cap for their Log Analytics workspace. You do need to create a separate daily cap for any classic Application Insights resources, because their data doesn't reside in a Log Analytics workspace.
+
+> [!TIP]
+> If you're concerned about the amount of billable data collected by Application Insights, you should configure [sampling](../app/sampling.md) to tune its data volume to the level you want. Use the daily cap as a safety method in case your application unexpectedly begins to send much higher volumes of telemetry.
+
+The maximum cap for an Application Insights classic resource is 1,000 GB/day unless you request a higher maximum for a high-traffic application. When you create a resource in the Azure portal, the daily cap is set to 100 GB/day. When you create a resource in Visual Studio, the default is small (only 32.3 MB/day). The daily cap default is set to facilitate testing. It's intended that the user will raise the daily cap before deploying the app into production.
+
+We've removed the restriction on some subscription types that have credit that couldn't be used for Application Insights. Previously, if the subscription had a spending limit, the daily cap dialog included instructions for removing the spending limit before the daily cap could be raised beyond 32.3 MB/day.
++
+## Determine your daily cap
+To help you determine an appropriate daily cap for your workspace, see [Azure Monitor cost and usage](../usage-estimated-costs.md) to understand your data ingestion trends. You can also review [Analyze usage in Log Analytics workspace](analyze-usage.md) which provides methods to analyze your workspace usage in more detail.
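+
+For example, the following query (a minimal sketch that uses the standard **Usage** table) charts your billable ingestion per day over roughly the last month, which can help you choose a cap that sits comfortably above your normal daily volume:
+
+```kusto
+Usage
+| where TimeGenerated > ago(32d)
+| where IsBillable == true
+| summarize IngestedGB = sum(Quantity) / 1000. by day = bin(StartTime, 1d) // Quantity is reported in MB
+| render columnchart
+```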
+++
+## Workspaces with Microsoft Defender for Cloud
+For workspaces with [Microsoft Defender for Cloud](../../security-center/index.yml), the daily cap doesn't stop the collection of the following data types, except for workspaces in which Microsoft Defender for Cloud was installed before June 19, 2017:
+
+- WindowsEvent
+- SecurityAlert
+- SecurityBaseline
+- SecurityBaselineSummary
+- SecurityDetection
+- SecurityEvent
+- WindowsFirewall
+- MaliciousIPCommunication
+- LinuxAuditLog
+- SysmonEvent
+- ProtectionStatus
+- Update
+- UpdateSummary
++
+## Set the daily cap
+### Log Analytics workspace
+To set or change the daily cap for a Log Analytics workspace in the Azure portal:
+
+1. From the **Log Analytics workspaces** menu, select your workspace, and then **Usage and estimated costs**.
+2. Select **Data Cap** at the top of the page.
+3. Select **ON** and then set the data volume limit in GB/day.
++
+> [!NOTE]
+> The reset hour for the workspace is displayed but cannot be configured.
+
+To configure the daily cap with Azure Resource Manager, set the `dailyQuotaGb` parameter under `WorkspaceCapping` as described at [Workspaces - Create Or Update](/rest/api/loganalytics/workspaces/createorupdate#workspacecapping).
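+
+For example, a minimal Resource Manager sketch of the workspace resource might look like the following. The workspace name, location, API version, and 10 GB/day quota are placeholder values, not recommendations:
+
+```json
+{
+  "type": "Microsoft.OperationalInsights/workspaces",
+  "apiVersion": "2021-06-01",
+  "name": "MyWorkspace",
+  "location": "eastus",
+  "properties": {
+    "sku": { "name": "PerGB2018" },
+    "retentionInDays": 30,
+    "workspaceCapping": {
+      "dailyQuotaGb": 10
+    }
+  }
+}
+```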
++
+### Classic Application Insights resource
+To set or change the daily cap for a classic Application Insights resource in the Azure portal:
+
+1. From the **Monitor** menu, select **Applications**, your application, and then **Usage and estimated costs**.
+2. Select **Data Cap** at the top of the page.
+3. Set the data volume limit in GB/day.
+4. If you want an email sent to the subscription administrator when the daily limit is reached, then select that option.
+5. Set the daily cap warning level as a percentage of the data volume limit.
+6. If you want an email sent to the subscription administrator when the daily cap warning level is reached, then select that option.
++
+To configure the daily cap with Azure Resource Manager, set the `dailyQuota`, `dailyQuotaResetTime`, and `warningThreshold` parameters as described in [Set the daily cap](../app/powershell.md#set-the-daily-cap).
++
+## Alert when daily cap is reached
+When the daily cap is reached for a Log Analytics workspace, a banner is displayed in the Azure portal, and an event is written to the **Operations** table in the workspace. You should create an alert rule to proactively notify you when this occurs.
+
+To receive an alert when the daily cap is reached, create a [log alert rule](../alerts/alerts-unified-log.md) with the following details.
+
+| Setting | Value |
+|:|:|
+| **Scope** | |
+| Target scope | Select your Log Analytics workspace. |
+| **Condition** | |
+| Signal type | Log |
+| Signal name | Custom log search |
+| Query | `_LogOperation | where Operation =~ "Data collection stopped" | where Detail contains "OverQuota"` |
+| Measurement | Measure: *Table rows*<br>Aggregation type: Count<br>Aggregation granularity: 5 minutes |
+| Alert Logic | Operator: Greater than<br>Threshold value: 0<br>Frequency of evaluation: 5 minutes |
+| Actions | Select or add an [action group](../alerts/action-groups.md) to notify you when the threshold is exceeded. |
+| **Details** | |
+| Severity| Warning |
+| Alert rule name | Daily data limit reached |
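+
+You can also run the same query directly in Log Analytics to check whether the cap has already been hit recently. This sketch just adds a time filter to the query used by the alert rule:
+
+```kusto
+_LogOperation
+| where TimeGenerated > ago(7d)
+| where Operation =~ "Data collection stopped"
+| where Detail contains "OverQuota"
+```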
++
+### Classic Application Insights resource
+When the daily cap is reached for a classic Application Insights resource, an event is created in the Azure Activity log with the following signal names. You can also optionally have an email sent to the subscription administrator both when the cap is reached and when a specified percentage of the daily cap has been reached.
+
+* Application Insights component daily cap warning threshold reached
+* Application Insights component daily cap reached
+
+To create an alert when the daily cap is reached, create an [Activity log alert rule](../alerts/alerts-activity-log.md#azure-portal) with the following details.
++
+| Setting | Value |
+|:|:|
+| **Scope** | |
+| Target scope | Select your application. |
+| **Condition** | |
+| Signal type | Activity Log |
+| Signal name | Application Insights component daily cap reached<br>Or<br>Application Insights component daily cap warning threshold reached |
+| Severity| Warning |
+| Alert rule name | Daily data limit reached |
+++
+## View the effect of the daily cap
+The following query can be used to track the data volumes that are subject to the daily cap for a Log Analytics workspace between daily cap resets. It excludes the security data types that aren't included in the daily cap. In this example, the workspace's reset hour is 14:00. Change this value to match your workspace's reset hour.
+
+```kusto
+let DailyCapResetHour=14;
+Usage
+| where DataType !in ("SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent")
+| where TimeGenerated > ago(32d)
+| extend StartTime=datetime_add("hour",-1*DailyCapResetHour,StartTime)
+| where StartTime > startofday(ago(31d))
+| where IsBillable
+| summarize IngestedGbBetweenDailyCapResets=sum(Quantity)/1000. by day=bin(StartTime , 1d) // Quantity in units of MB
+| render areachart
+```
+Add the `Update` and `UpdateSummary` data types to the `where DataType` line when the Update Management solution isn't running on the workspace or solution targeting is enabled ([learn more](../../security-center/security-center-pricing.md#what-data-types-are-included-in-the-500-mb-data-daily-allowance)), as shown in the sketch below.
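+
+For example, with those two data types added to the exclusion list, the full query becomes:
+
+```kusto
+let DailyCapResetHour=14;
+Usage
+| where DataType !in ("SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent", "Update", "UpdateSummary")
+| where TimeGenerated > ago(32d)
+| extend StartTime=datetime_add("hour",-1*DailyCapResetHour,StartTime)
+| where StartTime > startofday(ago(31d))
+| where IsBillable
+| summarize IngestedGbBetweenDailyCapResets=sum(Quantity)/1000. by day=bin(StartTime , 1d) // Quantity in units of MB
+| render areachart
+```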
+
+## Next steps
+
+- See [Azure Monitor Logs pricing details](cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
+- See [Analyze usage in Log Analytics workspace](analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher than expected usage and opportunities to reduce the amount of data collected.
azure-monitor Data Collection Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-collection-troubleshoot.md
+
+ Title: Troubleshoot why data is no longer being collected in Azure Monitor
+description: Steps to take if data is no longer being collected in Log Analytics workspace in Azure Monitor.
+ Last updated : 03/31/2022+
+
+# Troubleshoot why data is no longer being collected in Azure Monitor
+This article provides guidance to detect when data collection in Azure Monitor stops and steps you can take to determine and correct the causes.
++
+## Data collection status
+When data collection in a Log Analytics workspace stops, an event with a type of **Operation** is created in the workspace. Run the following query to check whether you're reaching the daily limit and missing data:
+
+```kusto
+Operation | where OperationCategory == 'Data Collection Status'
+```
+
+When data collection stops, the **OperationStatus** is **Warning**. When data collection starts, the **OperationStatus** is **Succeeded**.
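+
+For example, to look only at recent events where collection stopped, you can narrow the query as shown in this sketch, which uses the standard columns of the *Operation* table:
+
+```kusto
+Operation
+| where TimeGenerated > ago(7d)
+| where OperationCategory == 'Data Collection Status'
+| where OperationStatus == 'Warning'
+| sort by TimeGenerated desc
+```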
+
+To be notified when data collection stops, use the steps described in the [Alert when daily cap is reached](daily-cap.md#alert-when-daily-cap-is-reached) section. To configure an e-mail, webhook, or runbook action for the alert rule, use the steps described in [create an action group](../alerts/action-groups.md).
+
+## Daily cap was reached
+The [daily cap](daily-cap.md) limits the amount of data that a Log Analytics workspace can collect in a day. If the daily cap is reached, then data collection will stop until the reset time. Either wait for collection to automatically restart, or increase the daily data volume limit.
++
+## Legacy free pricing tier
+If your Log Analytics workspace is on the [legacy Free pricing tier](cost-logs.md#legacy-pricing-tiers) and has collected more than 500 MB of data in a day, data collection stops for the rest of the day. Wait until the following day for collection to automatically restart, or change to a paid pricing tier.
++
+## Workspace reached the data ingestion volume rate
+The [default ingestion volume rate limit](../service-limits.md#log-analytics-workspaces) for data sent from Azure resources using diagnostic settings is approximately 6 GB/min per workspace. This is an approximate value because the actual size can vary between data types, depending on the log length and its compression ratio. This limit doesn't apply to data that's sent from agents or the [Data Collector API](data-collector-api.md).
+
+If you send data at a higher rate to a single workspace, some data is dropped, and an event is sent to the **Operation** table in your workspace every 6 hours while the threshold continues to be exceeded. If your ingestion volume continues to exceed the rate limit or you are expecting to reach it sometime soon, you can request an increase to your workspace by sending an email to LAIngestionRate@microsoft.com or by opening a support request.
+
+Use the following query to retrieve the record that indicates the data ingestion rate limit was reached.
+
+```kusto
+Operation
+| where OperationCategory == "Ingestion"
+| where Detail startswith "The rate of data crossed the threshold"
+```
+
+## Azure subscription is in a suspended state
+Your Azure subscription could be in a suspended state for one of the following reasons:
+
+- Free trial ended
+- Azure pass expired
+- Monthly spending limit reached (such as on an MSDN or Visual Studio subscription)
++
+## Limits summary
+
+There are additional Log Analytics limits, some of which depend on the Log Analytics pricing tier. These are documented at [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md#log-analytics-workspaces).
++
+## Next steps
+
+- See [Analyze usage in Log Analytics workspace](../logs/analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher than expected usage and opportunities to reduce the amount of data collected.
azure-monitor Data Ingestion From File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-ingestion-from-file.md
+
+ Title: Ingest data from a file using Data Collection Rules (DCR)
+description: Learn how to ingest data from files into a Log Analytics workspace using data collection rules (DCR).
+++ Last updated : 03/21/2022++
+# Customer intent: As a DevOps specialist, I want to ingest external data from a file into a workspace.
+
+# Collect and ingest data from a file using Data Collection Rules (DCR) (Preview)
+
+If you want to collect log files from your systems using agents, you can use data collection rules.
+
+You can define how Azure Monitor transforms and stores data ingested into your workspace by setting [Data Collection Rules (DCR)](https://docs.microsoft.com/azure/azure-monitor/essentials/data-collection-rule-overview). Using DCR lets you ingest data quickly from different log formats.
+
+This tutorial explains how to ingest data from a file into a Log Analytics workspace using DCR.
+
+>[!NOTE]
+> * To use this method, you need to use the Log Analytics agent (MMA). We recommend using the Azure Monitor agent (AMA) instead, which has more native integration with Custom Logs v2 (currently in preview).
+> * [Custom Logs v2](https://docs.microsoft.com/azure/azure-monitor/logs/custom-logs-overview) allows transformations and exports.
+
+## Prerequisites
+
+To complete this tutorial, you need a [Log Analytics workspace](quick-create-workspace.md).
+
+## Create a custom log table
+
+>[!TIP]
+> * If you already have a custom log table, you can skip this step and go directly to creating a DCR.
+
+Before you can send data to the workspace, you need to create the custom table that the data will be sent to:
+
+1. Go to the **Log Analytics workspaces** menu in the Azure portal and select a workspace.
+1. Select **Custom Log** > **Add custom log**.
+1. Upload a sample log file.
+1. Select a record delimiter.
+1. Set a collection path:
+ 1. Select Windows or Linux to specify which path format you're adding.
+ 1. Set the path to the custom log file on your machine.
+1. Specify a name for the table. Azure Monitor automatically adds the *_CL* (custom log) suffix to the table name.
+1. Select **Create**.
+## Create a Data Collection Rule (DCR)
+1. Make sure the name of the stream is `Custom-{TableName}`.
+
+ For example:
+
+ ```json
+ {
+ "properties": {
+ "destinations": {
+ "logAnalytics": [
+ {
+            "workspaceResourceId": "/subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<WorkspaceName>",
+ "workspaceId": "WorkspaceID",
+ "name": "MyLogFolder"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Custom-DataPullerE2E_CL"
+ ],
+ "destinations": [
+ "MyLogFolder"
+ ],
+ "transformKql": "source",
+ "outputStream": "Custom-DataPullerE2E_CL"
+ }
+ ]
+ },
+ "location": "eastus2euap",
+ "id": "/subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroupName>/providers/Microsoft.Insights/dataCollectionRules/<DCRName>",
+ "name": "<DCRName>",
+ "type": "Microsoft.Insights/dataCollectionRules"
+ }
+ ```
+
+1. Set the Data Collection Rule to be the default on the workspace. Use the following API command:
+
+ ```json
+ PUT https://management.azure.com/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<WorkspaceName>?api-version=2015-11-01-preview
+ {
+ "properties": {
+ "defaultDataCollectionRuleResourceId": "/subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroupName>/providers/Microsoft.Insights/dataCollectionRules/<DCRName>"
+ },
+ "location": "eastus2euap",
+ "type": "Microsoft.OperationalInsights/workspaces"
+ }
+ ```
+
+1. To make the table eligible for file-based custom log ingestion via DCR, use the Custom log definition API.
+
+ 1. First run the following Get command:
+
+ ```json
+ GET https://management.azure.com/subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/MyLogFolder/logsettings/customlogs/definitions/DataPullerE2E_CL?api-version=2020-08-01
+ ```
+
+ 1. Copy the response and send a PUT request:
+
+ ```JSON
+ {
+ "Name": "DataPullerE2E_CL",
+ "Description": "custom log to test Data puller E2E",
+ "Inputs": [
+ {
+ "Location": {
+ "FileSystemLocations": {
+ "WindowsFileTypeLogPaths": [
+ "C:\\MyLogFolder\\*.txt",
+ "C:\\MyLogFolder\\MyLogFolder.txt"
+ ]
+ }
+ },
+ "RecordDelimiter": {
+ "RegexDelimiter": {
+                "Pattern": "\\n",
+ "MatchIndex": 0,
+ "NumberedGroup": null
+ }
+ }
+ }
+ ],
+ "Properties": [
+ {
+ "Name": "TimeGenerated",
+ "Type": "DateTime",
+ "Extraction": {
+ "DateTimeExtraction": {}
+ }
+ }
+ ],
+ "SetDataCollectionRuleBased": true
+ }
+ ```
+
+ >[!Note]
+ > * The `SetDataCollectionRuleBased` flag, from the last API command, enables the table as data puller.
+ > * Once you switch to DCREnabled mode, data will stop flowing unless you have DCR configured.
+
+ * To validate that the value is updated, run:
+ ```json
+ GET https://management.azure.com/subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroupName>/providers/microsoft.operationalinsights/workspaces/MyLogFolder/datasources?api-version=2020-08-01&$filter=(kind%20eq%20'CustomLog')
+ ```
azure-monitor Data Retention Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md
You can also purge data from a workspace using the [purge feature](personal-data
The Log Analytics [Purge API](/rest/api/loganalytics/workspacepurge/purge) doesn't affect retention billing. **To lower retention costs, decrease the retention period for the workspace or for specific tables.** ## Tables with unique retention policies
-By default, the tables of two data types - `Usage` and `AzureActivity` - keep data for at least 90 days at no charge. Increasing the workspace retention policy to more than 90 days also increases the retention policy of these tables. These tables are also free from data ingestion charges.
+By default, two data types - `Usage` and `AzureActivity` - keep data for at least 90 days at no charge. When you increase the workspace retention to more than 90 days, you also increase the retention of these data types, and you'll be charged for retaining this data beyond the 90-day period. These tables are also free from data ingestion charges.
Tables related to Application Insights resources also keep data for 90 days at no charge. You can adjust the retention policy of each of these tables individually.
You'll be charged for each day you retain data. The cost of retaining data for p
For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+## Classic Application Insights resources
+Data for workspace-based Application Insights resources is stored in a Log Analytics workspace, so it's included in the data retention and archive settings for the workspace. Classic Application Insights resources, however, have separate retention settings.
+
+The default retention for Application Insights resources is 90 days. Different retention periods can be selected for each Application Insights resource. The full set of available retention periods is 30, 60, 90, 120, 180, 270, 365, 550 or 730 days.
+
+To change the retention, from your Application Insights resource, go to the **Usage and Estimated Costs** page and select the **Data Retention** option:
+
+![Screenshot that shows where to change the data retention period.](../app/media/pricing/pricing-005.png)
+
+When you lower the retention period, there's a grace period of several days before the oldest data is removed.
+
+The retention can also be [set programmatically using PowerShell](../app/powershell.md#set-the-data-retention) using the `retentionInDays` parameter. If you set the data retention to 30 days, you can trigger an immediate purge of older data using the `immediatePurgeDataOn30Days` parameter, which may be useful for compliance-related scenarios. This purge functionality is only exposed via Azure Resource Manager and should be used with extreme care. The daily reset time for the data volume cap can be configured using Azure Resource Manager to set the `dailyQuotaResetTime` parameter.
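+
+As an alternative illustration to the PowerShell approach linked above, the following Resource Manager sketch shows where the retention setting lives on a classic Application Insights resource. The resource name, location, and API version are placeholders:
+
+```json
+{
+  "type": "microsoft.insights/components",
+  "apiVersion": "2020-02-02",
+  "name": "my-classic-appinsights",
+  "location": "eastus",
+  "kind": "web",
+  "properties": {
+    "Application_Type": "web",
+    "RetentionInDays": 90,
+    "ImmediatePurgeDataOn30Days": false
+  }
+}
+```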
+ ## Next steps - [Learn more about Log Analytics workspaces and data retention and archive.](log-analytics-workspace-overview.md) - [Create a search job to retrieve archive data matching particular criteria.](search-jobs.md)
azure-monitor Design Logs Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/design-logs-deployment.md
A Log Analytics workspace provides:
* A geographic location for data storage. * Data isolation by granting different users access rights following one of our recommended design strategies.
-* Scope for configuration of settings like [pricing tier](./manage-cost-storage.md#changing-pricing-tier), [retention](./manage-cost-storage.md#change-the-data-retention-period), and [data capping](./manage-cost-storage.md#manage-your-maximum-daily-data-volume).
+* Scope for configuration of settings like [pricing tier](cost-logs.md#commitment-tiers), [retention](data-retention-archive.md), and [data capping](daily-cap.md).
Workspaces are hosted on physical clusters. By default, the system creates and manages these clusters. Customers that ingest more than 4 TB/day are expected to create their own dedicated clusters for their workspaces, which gives them better control and a higher ingestion rate.
azure-monitor Log Analytics Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-overview.md
Each workspace contains multiple tables that are organized into separate columns
## Cost There is no direct cost for creating or maintaining a workspace. You're charged for the data sent to it (data ingestion) and how long that data is stored (data retention). These costs may vary based on the data plan of each table as described in [Log data plans (preview)](#log-data-plans-preview).
-See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for detailed pricing and [Manage usage and costs with Azure Monitor Logs](manage-cost-storage.md) for guidance on reducing your costs. If you are using your Log Analytics workspace with services other than Azure Monitor, then see the documentation for those services for pricing information.
+See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for detailed pricing and [Azure Monitor best practices - Cost management](../best-practices-cost.md) for guidance on reducing your costs. If you are using your Log Analytics workspace with services other than Azure Monitor, then see the documentation for those services for pricing information.
## Log data plans (preview) By default, all tables in a workspace are **Analytics** tables, which are available to all features of Azure Monitor and any other services that use the workspace. You can configure certain tables as **Basic Logs (preview)** to reduce the cost of storing high-volume verbose logs you use for debugging, troubleshooting and auditing, but not for analytics and alerts. Tables configured for Basic Logs have a lower ingestion cost in exchange for reduced features.
azure-monitor Log Standard Columns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-standard-columns.md
union withsource = tt *
``` ## \_BilledSize
-The **\_BilledSize** column specifies the size in bytes of data that will be billed to your Azure account if **\_IsBillable** is true. [Learn more](manage-cost-storage.md#data-size) about the details of how the billed size is calculated.
+The **\_BilledSize** column specifies the size in bytes of data that will be billed to your Azure account if **\_IsBillable** is true. See [Data size calculation](cost-logs.md#data-size-calculation) to learn more about the details of how the billed size is calculated.
### Examples
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
All operations on the cluster level require the `Microsoft.OperationalInsights/c
## Cluster pricing model-
-Log Analytics Dedicated Clusters use a Commitment Tier (formerly called capacity reservations) pricing model of at least 500 GB/day. Any usage above the tier level will be billed at effective per-GB rate of that Commitment Tier. Commitment Tier pricing information is available at the [Azure Monitor pricing page]( https://azure.microsoft.com/pricing/details/monitor/).
-
-The cluster Commitment Tier level is configured programmatically with Azure Resource Manager using the `Capacity` parameter under `Sku`. The `Capacity` is specified in units of GB and can have values of 500, 1000, 2000 or 5000 GB/day.
-
-There are two modes of billing for usage on a cluster. These can be specified by the `billingType` parameter when configuring your cluster.
-
-1. **Cluster (default)**--Billing for ingested data is done at the cluster level. The ingested data quantities from each workspace associated to a cluster are aggregated to calculate the daily bill for the cluster.
-
-2. **Workspaces**--The Commitment Tier costs for your Cluster are attributed proportionately to the workspaces in the cluster, by each workspace's data ingestion volume (after accounting for per-node allocations from [Microsoft Defender for Cloud](../../security-center/index.yml) for each workspace.) Details of pricing model are explained [here](./manage-cost-storage.md#log-analytics-dedicated-clusters).
-
-If your linked workspace is using legacy Per Node pricing tier, it will be billed based on data ingested against the cluster's Commitment Tier, and no longer Per Node. Per-node data allocations from Microsoft Defender for Cloud will continue to be applied.
-
-When you link workspaces to a cluster, the pricing tier is changed to cluster, and ingestion is billed based on cluster's Commitment Tier. Workspaces can be unlinked from a cluster at any time, and pricing tier change to per-GB.
-
-Complete details are billing for Log Analytics dedicated clusters are available [here](./manage-cost-storage.md#log-analytics-dedicated-clusters).
+Log Analytics Dedicated Clusters use a commitment tier pricing model of at least 500 GB/day. Any usage above the tier level will be billed at effective per-GB rate of that commitment tier. See [Azure Monitor Logs pricing details](cost-logs.md#dedicated-clusters) for pricing details for dedicated clusters.
## Create a dedicated cluster
You must specify the following properties when you create a new dedicated cluste
- **ClusterName** - **ResourceGroupName**: You should use a central IT resource group because clusters are usually shared by many teams in the organization. For more design considerations, review [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md). - **Location**-- **SkuCapacity**: The Commitment Tier (formerly called capacity reservations) can be set to 500, 1000, 2000 or 5000 GB/day. For more information on cluster costs, see [Manage Costs for Log Analytics clusters](./manage-cost-storage.md#log-analytics-dedicated-clusters).
+- **SkuCapacity**: The Commitment Tier (formerly called capacity reservations) can be set to 500, 1000, 2000 or 5000 GB/day. For more information on cluster costs, see [Dedicated clusters](./cost-logs.md#dedicated-clusters). A sketch of the corresponding `sku` setting appears after this list.
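+
+For illustration, a cluster create request body along the following lines sets the Commitment Tier through the `sku` capacity. This is a sketch with placeholder values rather than a complete walkthrough:
+
+```json
+{
+  "identity": {
+    "type": "SystemAssigned"
+  },
+  "sku": {
+    "name": "CapacityReservation",
+    "capacity": 500
+  },
+  "properties": {
+    "billingType": "Cluster"
+  },
+  "location": "eastus"
+}
+```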
The user account that creates the clusters must have the standard Azure resource creation permission: `Microsoft.Resources/deployments/*` and cluster write permission `Microsoft.OperationalInsights/clusters/write` by having in their role assignments this specific action or `Microsoft.OperationalInsights/*` or `*/write`.
After you create your cluster resource and it's fully provisioned, you can edit
- **Identity** - The identity used to authenticate to your Key Vault. This can be System-assigned or User-assigned. - **billingType** - Billing attribution for the cluster resource and its data. Includes on the following values: - **Cluster (default)**--The costs for your cluster are attributed to the cluster resource.
- - **Workspaces**--The costs for your cluster are attributed proportionately to the workspaces in the Cluster, with the cluster resource being billed some of the usage if the total ingested data for the day is under the commitment tier. See [Log Analytics Dedicated Clusters](./manage-cost-storage.md#log-analytics-dedicated-clusters) to learn more about the cluster pricing model.
+ - **Workspaces**--The costs for your cluster are attributed proportionately to the workspaces in the Cluster, with the cluster resource being billed some of the usage if the total ingested data for the day is under the commitment tier. See [Log Analytics Dedicated Clusters](./cost-logs.md#dedicated-clusters) to learn more about the cluster pricing model.
>[!IMPORTANT]
Authorization: Bearer <token>
## Next steps -- Learn about [Log Analytics dedicated cluster billing](./manage-cost-storage.md#log-analytics-dedicated-clusters)
+- Learn about [Log Analytics dedicated cluster billing](cost-logs.md#dedicated-clusters)
- Learn about [proper design of Log Analytics workspaces](../logs/design-logs-deployment.md)
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-cost-storage.md
- Title: Manage usage and costs for Azure Monitor Logs
-description: Learn how to change the pricing plan and manage data volume and retention policy for your Log Analytics workspace in Azure Monitor.
------ Previously updated : 03/05/2022---
-
-# Manage usage and costs with Azure Monitor Logs
-
-> [!NOTE]
-> This article describes how to understand and control your costs for Azure Monitor Logs. A related article, [Monitoring usage and estimated costs](..//usage-estimated-costs.md) describes how to view usage and estimated costs across multiple Azure monitoring features using [Azure Cost Management + Billing](../logs/manage-cost-storage.md#viewing-log-analytics-usage-on-your-azure-bill). All prices and costs in this article are for example purposes only.
-
-Azure Monitor Logs is designed to scale and support collecting, indexing, and storing massive amounts of data per day from any source in your enterprise or deployed in Azure. Although this might be a primary driver for your organization, cost-efficiency is ultimately the underlying driver. To that end, it's important to understand that the cost of a Log Analytics workspace isn't based only on the volume of data collected; it's also dependent on the selected plan, and how long you stored data generated from your connected sources.
-
-This article reviews how you can proactively monitor ingested data volume and storage growth. It also discusses how to define limits to control those associated costs.
-
-## Pricing model
-
-The default pricing for Log Analytics is a **Pay-As-You-Go** model that's based on ingested data volume and, optionally, for longer data retention. Data volume is measured as the size of the data that will be stored in GB (10^9 bytes). Each Log Analytics workspace is charged as a separate service and contributes to the bill for your Azure subscription. The amount of data ingestion can be considerable, depending on the following factors:
-
- - The set of management solutions enabled and their configuration
- - The number and type of monitored resources
- - Type of data collected from each monitored resource
-
-<a name="commitment-tier"></a>
-### Commitment Tiers
-In addition to the Pay-As-You-Go model, Log Analytics has **Commitment Tiers**, which can save you as much as 30 percent compared to the Pay-As-You-Go price. With the commitment tier pricing, you can commit to buy data ingestion starting at 100 GB/day at a lower price than Pay-As-You-Go pricing. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected.
--- During the commitment period, you can change to a higher commitment tier (which restarts the 31-day commitment period), but you can't move back to Pay-As-You-Go or to a lower commitment tier until after you finish the commitment period. -- At the end of the commitment period, the workspace retains the selected commitment tier, and the workspace can be moved to Pay-As-You-Go or to a different commitment tier at any time.
-
-Billing for the commitment tiers is done on a daily basis. [Learn more](https://azure.microsoft.com/pricing/details/monitor/) about Log Analytics Pay-As-You-Go and Commitment Tier pricing.
-
-> [!NOTE]
-> Starting June 2, 2021, **Capacity Reservations** are now called **Commitment Tiers**. Data collected above your commitment tier level (overage) is now billed at the same price-per-GB as the current commitment tier level, lowering costs compared to the old method of billing at the Pay-As-You-Go rate, and reducing the need for users with large data volumes to fine-tune their commitment level. Three new commitment tiers were also added: 1000, 2000, and 5000 GB/day.
-
-### Data size calculation
-
-<a name="data-size"></a>
-<a name="free-data-types"></a>
-In all pricing tiers, an event's data size is calculated from a string representation of the properties that are stored in Log Analytics for this event, regardless of whether the data is sent from an agent or added during the ingestion process. This includes any [custom fields](custom-fields.md) that are added as data is collected and then stored in Log Analytics. Several properties common to all data types, including some [Log Analytics Standard Properties](./log-standard-columns.md), are excluded in the calculation of the event size. This includes `_ResourceId`, `_SubscriptionId`, `_ItemId`, `_IsBillable`, `_BilledSize` and `Type`. All other properties stored in Log Analytics are included in the calculation of the event size. Some data types are free from data ingestion charges altogether, for example the [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity), [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat), [Usage](/azure/azure-monitor/reference/tables/usage) and [Operation](/azure/azure-monitor/reference/tables/operation) types. Some solutions have more solution-specific policies about free data ingestion, for instance [Azure Migrate](https://azure.microsoft.com/pricing/details/azure-migrate/) makes dependency visualization data free for the first 180-days of a Server Assessment. To determine whether an event was excluded from billing for data ingestion, you can use the [_IsBillable](log-standard-columns.md#_isbillable) property as shown [below](#data-volume-for-specific-events). Usage is reported in GB (10^9 bytes).
-
-Also, some solutions, such as [Microsoft Defender for Cloud](https://azure.microsoft.com/pricing/details/azure-defender/), [Microsoft Sentinel](https://azure.microsoft.com/pricing/details/azure-sentinel/), and [Configuration management](https://azure.microsoft.com/pricing/details/automation/) have their own pricing models.
-
-### Log Analytics Dedicated Clusters
-
-[Log Analytics Dedicated Clusters](logs-dedicated-clusters.md) are collections of workspaces in a single managed Azure Data Explorer cluster to support advanced scenarios, like [Customer-Managed Keys](customer-managed-keys.md). Log Analytics Dedicated Clusters use the same commitment tier pricing model as workspaces, except that a cluster must have a commitment level of at least 500 GB/day. There is no Pay-As-You-Go option for clusters. The cluster commitment tier has a 31-day commitment period after the commitment level is increased. During the commitment period, the commitment tier level can't be reduced, but it can be increased at any time. When workspaces are associated to a cluster, the data ingestion billing for those workspaces is done at the cluster level using the configured commitment tier level. Learn more about [creating a Log Analytics Clusters](customer-managed-keys.md#create-cluster) and [associating workspaces to it](customer-managed-keys.md#link-workspace-to-cluster). For information about commitment tier pricing, see the [Azure Monitor pricing page]( https://azure.microsoft.com/pricing/details/monitor/).
-
-The cluster commitment tier level is programmatically configured with Azure Resource Manager using the `Capacity` parameter under `Sku`. The `Capacity` is specified in units of GB and can have values of 500, 1000, 2000 or 5000 GB/day. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. For more information, see [Azure Monitor customer-managed key](customer-managed-keys.md).
-
-There are two modes of billing for usage on a cluster. These can be specified by the `billingType` parameter when [creating a cluster](logs-dedicated-clusters.md#create-a-dedicated-cluster) or set after creation. The two modes are:
--- **Cluster**: in this case (which is the default), billing for ingested data is done at the cluster level. The ingested data quantities from each workspace associated to a cluster are aggregated to calculate the daily bill for the cluster. Per-node allocations from [Microsoft Defender for Cloud](../../security-center/index.yml) are applied at the workspace level prior to this aggregation of aggregated data across all workspaces in the cluster. --- **Workspaces**: the commitment tier costs for your cluster are attributed proportionately to the workspaces in the cluster, by each workspace's data ingestion volume (after accounting for per-node allocations from [Microsoft Defender for Cloud](../../security-center/index.yml) for each workspace.) If the total data volume ingested into a cluster for a day is less than the commitment tier, each workspace is billed for its ingested data at the effective per-GB commitment tier rate by billing them a fraction of the commitment tier, and the unused part of the commitment tier is billed to the cluster resource. If the total data volume ingested into a cluster for a day is more than the commitment tier, each workspace is billed for a fraction of the commitment tier, based on its fraction of the ingested data that day and each workspace for a fraction of the ingested data above the commitment tier. If the total data volume ingested into a workspace for a day is above the commitment tier, nothing is billed to the cluster resource.-
-In cluster billing options, data retention is billed for each workspace. Cluster billing starts when the cluster is created, regardless of whether workspaces are associated with the cluster. Workspaces associated to a cluster no longer have their own pricing tier.
-
-## Estimating the costs to manage your environment
-
-If you're not yet using Azure Monitor Logs, you can use the [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to estimate the cost of using Log Analytics. In the **Search** box, enter "Azure Monitor", and then select the resulting Azure Monitor tile. Scroll down the page to **Azure Monitor**, and then expand the **Log Analytics** section. Here you can enter the GB of data that you expect to collect. If you're already evaluating Azure Monitor Logs, you can use data statistics from your own environment. See below for how to determine the [number of monitored VMs](#understanding-nodes-sending-data) and the [volume of data your workspace is ingesting](#understanding-ingested-data-volume). If you're not yet running Log Analytics, here is some guidance for estimating data volumes:
-
-1. **Monitoring VMs:** with typical monitoring enabled, 1 GB to 3 GB of data month is ingested per monitored VM.
-2. **Monitoring Azure Kubernetes Service (AKS) clusters:** details on expected data volumes for monitoring a typical AKS cluster are available [here](../containers/container-insights-cost.md#estimating-costs-to-monitor-your-aks-cluster). Follow these [best practices](../containers/container-insights-cost.md#controlling-ingestion-to-reduce-cost) to control your AKS cluster monitoring costs.
-3. **Application monitoring:** the Azure Monitor pricing calculator includes a data volume estimator using on your application's usage and based on a statistical analysis of Application Insights data volumes. In the Application Insights section of the pricing calculator, toggle the switch next to "Estimate data volume based on application activity" to use this.
-
-## Viewing Log Analytics usage on your Azure bill
-
-The easiest way to view your billed usage for a particular Log Analytics workspace is to go to the **Overview** page of the workspace and click **View Cost** in the upper right corner of the Essentials section at the top of the page. This will launch the Cost Analysis from Azure Cost Management + Billing already scoped to this workspace. You might need additional access to Cost Management data ([learn more](../../cost-management-billing/costs/assign-access-acm-data.md))
-
-Alternatively, you can start in the [Azure Cost Management + Billing](../../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=%2fazure%2fbilling%2fTOC.json) hub. here you can use the "Cost analysis" functionality to view your Azure resource expenses. To track your Log Analytics expenses, you can add a filter by "Resource type" (to microsoft.operationalinsights/workspace for Log Analytics and microsoft.operationalinsights/cluster for Log Analytics Clusters). For **Group by**, select **Meter category** or **Meter**. Other services, like Microsoft Defender for Cloud and Microsoft Sentinel, also bill their usage against Log Analytics workspace resources. To see the mapping to the service name, you can select the Table view instead of a chart.
-
-<a name="export-usage"></a>
-<a name="download-usage"></a>
-
-To gain more understanding of your usage, you can [download your usage from the Azure portal](../../cost-management-billing/understand/download-azure-daily-usage.md). For step-by-step instructions, review this [tutorial](../../cost-management-billing/costs/tutorial-export-acm-data.md).
-In the downloaded spreadsheet, you can see usage per Azure resource (for example, Log Analytics workspace) per day. In this Excel spreadsheet, usage from your Log Analytics workspaces can be found by first filtering on the "Meter Category" column to show "Log Analytics", "Insight and Analytics" (used by some of the legacy pricing tiers), and "Azure Monitor" (used by commitment tier pricing tiers), and then adding a filter on the "Instance ID" column that is "contains workspace" or "contains cluster" (the latter to include Log Analytics Cluster usage). The usage is shown in the "Consumed Quantity" column, and the unit for each entry is shown in the "Unit of Measure" column. For more information, see [Review your individual Azure subscription bill](../../cost-management-billing/understand/review-individual-bill.md).
-
-## Understand your usage and optimizing your pricing tier
-<a name="understand-your-usage-and-estimate-costs"></a>
-
-To learn about your usage trends and choose the most cost-effective log Analytics pricing tier, use **Log Analytics Usage and Estimated Costs**. This shows how much data is collected by each solution, how much data is being retained, and an estimate of your costs for each pricing tier based on recent data ingestion patterns.
--
-To explore your data in more detail, select on the icon in the upper-right corner of either chart on the **Usage and Estimated Costs** page. Now you can work with this query to explore more details of your usage.
--
-From the **Usage and Estimated Costs** page, you can review your data volume for the month. This includes all the billable data received and retained in your Log Analytics workspace.
-
-Log Analytics charges are added to your Azure bill. You can see details of your Azure bill under the **Billing** section of the Azure portal or in the [Azure Billing Portal](https://account.windowsazure.com/Subscriptions).
-
-## Changing pricing tier
-
-To change the Log Analytics pricing tier of your workspace:
-
-1. In the Azure portal, open **Usage and estimated costs** from your workspace; you'll see a list of each of the pricing tiers available to this workspace.
-
-2. Review the estimated costs for each pricing tier. This estimate is based on the last 31 days of usage, so this cost estimate relies on the last 31 days being representative of your typical usage. In the example below, you can see how, based on the data patterns from the last 31 days, this workspace would cost less in the Pay-As-You-Go tier (#1) compared to the 100 GB/day commitment tier (#2).
-
-
-3. After reviewing the estimated costs based on the last 31 days of usage, if you decide to change the pricing tier, select **Select**.
-
-### Changing pricing tier via ARM
-
-You can also [set the pricing tier via Azure Resource Manager](./resource-manager-workspace.md) using the `sku` object to set the pricing tier, and the `capacityReservationLevel` parameter if the pricing tier is `capacityresrvation`. (Learn more about [setting workspace properties via ARM](/azure/templates/microsoft.operationalinsights/2020-08-01/workspaces?tabs=json#workspacesku-object).) Here is a sample Azure Resource Manager template to set your workspace to a 300 GB/day commitment tier (in Resource Manager, it's called `capacityreservation`).
-
-```
-{
- "$schema": https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#,
- "contentVersion": "1.0.0.0",
- "resources": [
- {
- "name": "YourWorkspaceName",
- "type": "Microsoft.OperationalInsights/workspaces",
- "apiVersion": "2020-08-01",
- "location": "yourWorkspaceRegion",
- "properties": {
- "sku": {
- "name": "capacityreservation",
- "capacityReservationLevel": 300
- }
- }
- }
- ]
-}
-```
-
-To use this template in PowerShell, after [installing the Azure Az PowerShell module](/powershell/azure/install-az-ps), sign in to Azure using `Connect-AzAccount`, select the subscription containing your workspace using `Select-AzSubscription -SubscriptionId YourSubscriptionId`, and apply the template (saved in a file named template.json):
-
-```
-New-AzResourceGroupDeployment -ResourceGroupName "YourResourceGroupName" -TemplateFile "template.json"
-```
-
-To set the pricing tier to other values such as Pay-As-You-Go (called `pergb2018` for the SKU), omit the `capacityReservationLevel` property. Learn more about [creating ARM templates](../../azure-resource-manager/templates/template-tutorial-create-first-template.md), [adding a resource to your template](../../azure-resource-manager/templates/template-tutorial-add-resource.md), and [applying templates](../resource-manager-samples.md).
-
-### Tracking pricing tier changes
-
-Changes to a workspace's pricing pier are recorded in the [Activity Log](../essentials/activity-log.md) with an event with the Operation named "Create Workspace". The event's **Change history** tab will show the old and new pricing tiers in the `properties.sku.name` row. Click the "Activity Log" option from your workspace to see events scoped to a particular workspace. To monitor changes the pricing tier, you can create an alert for the "Create Workspace" operation.
-
-## Legacy pricing tiers
-
-Subscriptions that contained a Log Analytics workspace or Application Insights resource on April 2, 2018, or are linked to an Enterprise Agreement that started before February 1, 2019 and is still active, will continue to have access to use the legacy pricing tiers: **Free Trial**, **Standalone (Per GB)**, and **Per Node (OMS)**. Workspaces in the Free Trial pricing tier will have daily data ingestion limited to 500 MB (except for security data types collected by [Microsoft Defender for Cloud](../../security-center/index.yml)) and the data retention is limited to seven days. The Free Trial pricing tier is intended only for evaluation purposes. No SLA is provided for the Free tier. Workspaces in the Standalone or Per Node pricing tiers have user-configurable retention from 30 to 730 days. Creating new workspaces in (or moving existing workspaces into) the Free Trial pricing tier is possible until July 1, 2022.
-
-Usage on the Standalone pricing tier is billed by the ingested data volume. It is reported in the **Log Analytics** service and the meter is named "Data Analyzed".
-
-The Per Node pricing tier charges per monitored VM (node) on an hour granularity. For each monitored node, the workspace is allocated 500 MB of data per day that's not billed. This allocation is calculated with hourly granularity and is aggregated at the workspace level each day. Data ingested above the aggregate daily data allocation is billed per GB as data overage. On your bill, the service will be **Insight and Analytics** for Log Analytics usage if the workspace is in the Per Node pricing tier. Usage is reported on three meters:
-- **Node**: this is usage for the number of monitored nodes (VMs) in units of node months.
-- **Data Overage per Node**: this is the number of GB of data ingested in excess of the aggregated data allocation.
-- **Data Included per Node**: this is the amount of ingested data that was covered by the aggregated data allocation. This meter is also used, for workspaces in all pricing tiers, to show the amount of data covered by the Microsoft Defender for Cloud data allocation.
-
-> [!TIP]
-> If your workspace has access to the **Per Node** pricing tier but you're wondering whether it would cost less in a Pay-As-You-Go tier, you can [use the query below](#evaluating-the-legacy-per-node-pricing-tier) to easily get a recommendation.
-
-Workspaces created before April 2016 can continue to use the **Standard** and **Premium** pricing tiers that have fixed data retention of 30 days and 365 days, respectively. New workspaces can't be created in the **Standard** or **Premium** pricing tiers, and if a workspace is moved out of these tiers, it can't be moved back. Data ingestion meters on your Azure bill for these legacy tiers are called "Data analyzed."
-
-### Legacy pricing tiers and Microsoft Defender for Cloud
-
-There are also some interactions between the use of legacy Log Analytics tiers and how usage is billed for [Microsoft Defender for Cloud](../../security-center/index.yml):
-- If the workspace is in the legacy Standard or Premium tier, Microsoft Defender for Cloud is billed only for Log Analytics data ingestion, not per node.
-- If the workspace is in the legacy Per Node tier, Microsoft Defender for Cloud is billed using the current [Microsoft Defender for Cloud node-based pricing model](https://azure.microsoft.com/pricing/details/security-center/).
-- In other pricing tiers (including commitment tiers), if Microsoft Defender for Cloud was enabled before June 19, 2017, Microsoft Defender for Cloud is billed only for Log Analytics data ingestion. Otherwise, Microsoft Defender for Cloud is billed using the current Microsoft Defender for Cloud node-based pricing model.
-
-More details of pricing tier limitations are available at [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md#log-analytics-workspaces).
-
-None of the legacy pricing tiers have regional-based pricing.
-
-> [!NOTE]
-> To use the entitlements that come from purchasing OMS E1 Suite, OMS E2 Suite, or OMS Add-On for System Center, choose the Log Analytics *Per Node* pricing tier.
-
-## Log Analytics and Microsoft Defender for Cloud
-<a name="ASC"></a>
-
-[Microsoft Defender for Servers (part of Defender for Cloud)](../../security-center/index.yml) billing is closely tied to Log Analytics billing. Microsoft Defender for Servers [bills by the number of monitored servers](https://azure.microsoft.com/pricing/details/azure-defender/) and provides a 500 MB/server/day data allocation that is applied to the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security) (WindowsEvent, SecurityAlert, SecurityBaseline, SecurityBaselineSummary, SecurityDetection, SecurityEvent, WindowsFirewall, MaliciousIPCommunication, LinuxAuditLog, SysmonEvent, ProtectionStatus) and the Update and UpdateSummary data types when the Update Management solution isn't running on the workspace or solution targeting is enabled ([learn more](../../security-center/security-center-pricing.md#what-data-types-are-included-in-the-500-mb-data-daily-allowance)). The count of monitored servers is calculated on an hourly granularity. The daily data allocation contributions from each monitored server are aggregated at the workspace level. If the workspace is in the legacy Per Node pricing tier, the Microsoft Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data.
-
-To view the daily Defender for Servers data allocations for a workspace, you need to [export your usage details](#viewing-log-analytics-usage-on-your-azure-bill), open the usage spreadsheet and filter the meter category to "Insight and Analytics". You'll then see usage with the meter name "Data Included per Node" which has a zero price per GB. The consumed quantity column will show the number of GBs of Defender for Cloud data allocation for the day. (If the workspace is in the legacy Per Node Log Analytics pricing tier, this meter will also include the data allocations from this Log Analytics pricing tier.)
-
-## Change the data retention period
-
-The following steps describe how to configure how long log data is kept in your workspace. Data retention at the workspace level can be configured from 30 to 730 days (2 years) for all workspaces unless they're using the legacy Free Trial pricing tier. Retention for individual data types can be set as low as 4 days. [Learn more](https://azure.microsoft.com/pricing/details/monitor/) about pricing for longer data retention. To retain data longer than 730 days, consider using [Log Analytics workspace data export](logs-data-export.md).
-
-### Workspace level default retention
-
-To set the default retention for your workspace:
-
-1. In the Azure portal, from your workspace, select **Usage and estimated costs** in the left pane.
-2. On the **Usage and estimated costs** page, select **Data Retention** at the top of the page.
-3. On the pane, move the slider to increase or decrease the number of days, and then select **OK**. If you're on the *free* tier, you can't modify the data retention period; you need to upgrade to the paid tier to control this setting.
--
-When the retention is lowered, there's a grace period of several days before the data older than the new retention setting is removed.
-
-The **Data Retention** page allows retention settings of 30, 31, 60, 90, 120, 180, 270, 365, 550, and 730 days. If another setting is required, you can configure it with [Azure Resource Manager](./resource-manager-workspace.md) by using the `retentionInDays` parameter. When you set the data retention to 30 days, you can trigger an immediate purge of older data using the `immediatePurgeDataOn30Days` parameter (eliminating the grace period). This might be useful for compliance-related scenarios where immediate data removal is imperative. This immediate purge functionality is only exposed via Azure Resource Manager.
-
-Workspaces with 30 days retention might actually retain data for 31 days. If it's imperative that data be kept for only 30 days, use Azure Resource Manager to set the retention to 30 days and also set the `immediatePurgeDataOn30Days` parameter.
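-
-As a minimal sketch, reusing the placeholder workspace name and region from the pricing tier template earlier in this article, the following template sets the workspace retention to 30 days and requests immediate purge of older data. The placement of `immediatePurgeDataOn30Days` under `features` is an assumption to verify against the workspace ARM reference for your API version:
-
-```json
-{
-  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
-  "contentVersion": "1.0.0.0",
-  "resources": [
-    {
-      "name": "YourWorkspaceName",
-      "type": "Microsoft.OperationalInsights/workspaces",
-      "apiVersion": "2020-08-01",
-      "location": "yourWorkspaceRegion",
-      "properties": {
-        "retentionInDays": 30,
-        "features": {
-          "immediatePurgeDataOn30Days": true
-        }
-      }
-    }
-  ]
-}
-```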
-
-By default, two data types - `Usage` and `AzureActivity` - are retained for a minimum of 90 days at no charge. When you increase the workspace retention to more than 90 days, you also increase the retention of these data types, and you'll be charged for retaining this data beyond the 90-day period. These data types are also free from data ingestion charges.
-
-Data types from workspace-based Application Insights resources (`AppAvailabilityResults`, `AppBrowserTimings`, `AppDependencies`, `AppExceptions`, `AppEvents`, `AppMetrics`, `AppPageViews`, `AppPerformanceCounters`, `AppRequests`, `AppSystemEvents`, and `AppTraces`) are also retained for 90 days at no charge by default. Their retention can be adjusted using the retention by data type functionality.
-
-> [!TIP]
-> The Log Analytics [purge API](/rest/api/loganalytics/workspacepurge/purge) doesn't affect retention billing and is intended to be used for very limited cases. **To reduce your retention bill, the retention period must be reduced either for the workspace or for specific data types.** Learn more about managing [personal data stored in Log Analytics and Application Insights](./personal-data-mgmt.md).
-
-### Retention by data type
-
-It's also possible to specify different retention settings for individual data types from 4 to 730 days (except for workspaces in the legacy Free Trial pricing tier) that override the workspace-level default retention. Each data type is a sub-resource of the workspace. For example, the SecurityEvent table can be addressed in [Azure Resource Manager](../../azure-resource-manager/management/overview.md) as:
-
-```
-/subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent
-```
-
-Note that the data type (table) is case-sensitive. To get the current per-data-type retention settings of a particular data type (in this example SecurityEvent), use:
-
-```JSON
- GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2017-04-26-preview
-```
-
-> [!NOTE]
-> Retention is only returned for a data type if the retention is explicitly set for it. Data types that don't have retention explicitly set (and thus inherit the workspace retention) don't return anything from this call.
-
-To get the current per-data-type retention settings for all data types in your workspace that have had their per-data-type retention set, just omit the specific data type, for example:
-
-```JSON
- GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables?api-version=2017-04-26-preview
-```
-
-To set the retention of a particular data type (in this example SecurityEvent) to 730 days, use:
-
-```JSON
- PUT /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2017-04-26-preview
- {
- "properties":
- {
- "retentionInDays": 730
- }
- }
-```
-
-Valid values for `retentionInDays` are from 4 through 730.
-
-A convenient open-source tool for working directly with Azure Resource Manager to set retention by data type is [ARMclient](https://github.com/projectkudu/ARMClient). Learn more about ARMclient from articles by [David Ebbo](http://blog.davidebbo.com/2015/01/azure-resource-manager-client.html) and Daniel Bowbyes. Here's an example using ARMClient, setting SecurityEvent data to a 730-day retention:
-
-```
-armclient PUT /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2017-04-26-preview "{properties: {retentionInDays: 730}}"
-```
-
-> [!TIP]
-> Setting retention on individual data types can be used to reduce your costs for data retention. For data collected starting in October 2019 (when this feature was released), reducing the retention for some data types can reduce your retention cost over time. For data collected earlier, setting a lower retention for an individual type won't affect your retention costs.
-
-## Manage your maximum daily data volume
-
-You can configure a daily cap to limit the daily ingestion for your workspace, but use care, because your goal shouldn't be to hit the daily limit. Otherwise, you lose data for the remainder of the day, which can impact other Azure services and solutions whose functionality may depend on up-to-date data being available in the workspace. As a result, your ability to observe and receive alerts when the health conditions of resources supporting IT services are impacted also suffers. The daily cap is intended as a way to manage an **unexpected increase** in data volume from your managed resources and stay within your limit, or to limit unplanned charges for your workspace. It's not appropriate to set a daily cap so that it's met each day on a workspace.
-
-Each workspace has its daily cap applied on a different hour of the day. The reset hour is shown in the **Daily Cap** page (see below). This reset hour can't be configured.
-
-Soon after the daily limit is reached, the collection of billable data types stops for the rest of the day. Latency inherent in applying the daily cap means that the cap isn't applied at precisely the specified daily cap level. A warning banner appears across the top of the page for the selected Log Analytics workspace, and an operation event is sent to the *Operation* table under the **LogManagement** category. Data collection resumes after the reset time defined under *Daily limit will be set at*. We recommend defining an alert rule that's based on this operation event, configured to notify when the daily data limit is reached. For more information, see [Alert when daily cap is reached](#alert-when-daily-cap-is-reached) section.
-
-> [!NOTE]
-> The daily cap can't stop data collection at precisely the specified cap level and some excess data is expected, particularly if the workspace is receiving high volumes of data. If data is collected above the cap, it's still billed. For a query that is helpful in studying the daily cap behavior, see the [View the effect of the Daily Cap](#view-the-effect-of-the-daily-cap) section in this article.
-
-> [!WARNING]
-> For workspaces with Microsoft Defender for Cloud, the daily cap doesn't stop the collection of data types **WindowsEvent**, **SecurityAlert**, **SecurityBaseline**, **SecurityBaselineSummary**, **SecurityDetection**, **SecurityEvent**, **WindowsFirewall**, **MaliciousIPCommunication**, **LinuxAuditLog**, **SysmonEvent**, **ProtectionStatus**, **Update**, and **UpdateSummary**, except for workspaces in which Microsoft Defender for Cloud was installed before June 19, 2017.
-
-### Identify what daily data limit to define
-
-To understand the data ingestion trend and the daily volume cap to define, review [Log Analytics Usage and estimated costs](../usage-estimated-costs.md). Consider it with care, because you can't monitor your resources after the limit is reached.
-
-### Set the daily cap
-
-The following steps describe how to configure a limit to manage the volume of data that your Log Analytics workspace ingests per day.
-
-1. From your workspace, select **Usage and estimated costs** in the left pane.
-2. On the **Usage and estimated costs** page for the selected workspace, select **Data Cap** at the top of the page.
-3. By default, **Daily cap** is set to **OFF**. To enable it, select **ON**, and then set the data volume limit in GB/day.
--
-You can also use Azure Resource Manager to configure the daily cap by setting the `dailyQuotaGb` parameter under `WorkspaceCapping`, as described at [Workspaces - Create Or Update](/rest/api/loganalytics/workspaces/createorupdate#workspacecapping).
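-
-For example, here is a minimal sketch of a template that sets a 10 GB/day cap; the workspace name, region, and quota value are placeholders:
-
-```json
-{
-  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
-  "contentVersion": "1.0.0.0",
-  "resources": [
-    {
-      "name": "YourWorkspaceName",
-      "type": "Microsoft.OperationalInsights/workspaces",
-      "apiVersion": "2020-08-01",
-      "location": "yourWorkspaceRegion",
-      "properties": {
-        "workspaceCapping": {
-          "dailyQuotaGb": 10
-        }
-      }
-    }
-  ]
-}
-```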
-
-You can track changes made to the daily cap using this query:
-
-```kusto
-_LogOperation | where Operation == "Workspace Configuration" | where Detail contains "Daily quota"
-```
-
-Learn more about the [_LogOperation](./monitor-workspace.md) function.
-
-### View the effect of the daily cap
-
-To view the effect of the daily cap, it's important to account for the security data types that aren't included in the daily cap, and the reset hour for your workspace. The daily cap reset hour is visible on the **Daily Cap** page. The following query can be used to track the data volumes that are subject to the daily cap between daily cap resets. In this example, the workspace's reset hour is 14:00. You'll need to update this for your workspace.
-
-```kusto
-let DailyCapResetHour=14;
-Usage
-| where DataType !in ("SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent")
-| where TimeGenerated > ago(32d)
-| extend StartTime=datetime_add("hour",-1*DailyCapResetHour,StartTime)
-| where StartTime > startofday(ago(31d))
-| where IsBillable
-| summarize IngestedGbBetweenDailyCapResets=sum(Quantity)/1000. by day=bin(StartTime , 1d) // Quantity in units of MB
-| render areachart
-```
-Add the `Update` and `UpdateSummary` data types to the `where DataType` line when the Update Management solution isn't running on the workspace or solution targeting is enabled ([learn more](../../security-center/security-center-pricing.md#what-data-types-are-included-in-the-500-mb-data-daily-allowance)).
-
-### Alert when daily cap is reached
-
-The Azure portal shows a visual cue when your data limit threshold is met, but this behavior doesn't necessarily align with how you manage operational issues that require immediate attention. To receive an alert notification, you can create a new alert rule in Azure Monitor. To learn more, see [how to create, view, and manage alerts](../alerts/alerts-metric.md).
-
-To get you started, here are the recommended settings for the alert querying the `Operation` table using the `_LogOperation` function ([learn more](./monitor-workspace.md)).
-
-- Target: Select your Log Analytics resource
-- Criteria:
- - Signal name: Custom log search
- - Search query: `_LogOperation | where Operation =~ "Data collection stopped" | where Detail contains "OverQuota"`
- - Based on: Number of results
- - Condition: Greater than
- - Threshold: 0
- - Period: 5 (minutes)
- - Frequency: 5 (minutes)
-- Alert rule name: Daily data limit reached
-- Severity: Warning (Sev 1)
-
-After an alert is defined and the limit is reached, an alert is triggered and performs the response defined in the Action Group. It can notify your team in the following ways:
-
-- Email and text messages
-- Automated actions using webhooks
-- Azure Automation runbooks
-- [Integration with an external ITSM solution](../alerts/itsmc-definition.md#create-itsm-work-items-from-azure-alerts)
-
-## Investigate your Log Analytics usage
-<a name="troubleshooting-why-usage-is-higher-than-expected"></a>
-
-Higher usage is caused by one, or both, of the following:
-- More nodes than expected sending data to the Log Analytics workspace. For information, see the [Understanding nodes sending data](#understanding-nodes-sending-data) section of this article.
-- More data than expected being sent to the Log Analytics workspace (perhaps due to starting to use a new solution or a configuration change to an existing solution). For information, see the [Understanding ingested data volume](#understanding-ingested-data-volume) section of this article.
-
-If you observe high data ingestion reported using the `Usage` records (see the [Data volume by solution](#data-volume-by-solution) section), but you don't observe the same results summing `_BilledSize` directly on the [data type](#data-volume-for-specific-events), it's possible that you have significant late-arriving data. For information about how to diagnose this, see the [Late arriving data](#late-arriving-data) section of this article.
-
-### Log Analytics Workspace Insights
-
-Start understanding your data volumes in the **Usage** tab of the [Log Analytics Workspace Insights workbook](log-analytics-workspace-insights-overview.md). On the **Usage Dashboard**, you can easily see:
-- Which data tables are ingesting the most data volume in the main table,
-- What are the top resources contributing data, and
-- What is the trend of data ingestion.
-
-You can pivot to the **Additional Queries** tab to easily run more queries that are useful for understanding your data patterns.
-
-Learn more about the [capabilities of the Usage tab](log-analytics-workspace-insights-overview.md#usage-tab).
-
-While this workbook can answer many questions without even running a query, the queries in the next two sections will help you answer more specific questions or do deeper analyses.
-
-## Understanding ingested data volume
-
-On the **Usage and Estimated Costs** page, the *Data ingestion per solution* chart shows the total volume of data sent and how much is being sent by each solution. You can determine trends like whether the overall data usage (or usage by a particular solution) is growing, remaining steady, or decreasing.
-
-### Data volume for specific events
-
-To look at the size of ingested data for a particular set of events, you can query the specific table (in this example `Event`) and then restrict the query to the events of interest (in this example event ID 5145 or 5156):
-
-```kusto
-Event
-| where TimeGenerated > startofday(ago(31d)) and TimeGenerated < startofday(now())
-| where EventID == 5145 or EventID == 5156
-| where _IsBillable == true
-| summarize count(), Bytes=sum(_BilledSize) by EventID, bin(TimeGenerated, 1d)
-```
-
-Note that the clause `where _IsBillable == true` filters out data types from certain solutions for which there is no ingestion charge. [Learn more](./log-standard-columns.md#_isbillable) about `_IsBillable`.
-
-### Data volume by solution
-
-You can build a query on the [Usage](/azure/azure-monitor/reference/tables/usage) data type to view the billable data volume by solution over the last month (excluding the last partial day):
-
-```kusto
-Usage
-| where TimeGenerated > ago(32d)
-| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
-| where IsBillable == true
-| summarize BillableDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d), Solution
-| render columnchart
-```
-
-The clause with `TimeGenerated` is only to ensure that the query experience in the Azure portal looks back beyond the default 24 hours. When using the **Usage** data type, `StartTime` and `EndTime` represent the time buckets for which results are presented.
-
-### Data volume by type
-
-You can drill in further to see data trends by data type:
-
-```kusto
-Usage
-| where TimeGenerated > ago(32d)
-| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
-| where IsBillable == true
-| summarize BillableDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d), DataType
-| render columnchart
-```
-
-Or, to see a table by solution and type for the last month:
-
-```kusto
-Usage
-| where TimeGenerated > ago(32d)
-| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
-| where IsBillable == true
-| summarize BillableDataGB = sum(Quantity) / 1000 by Solution, DataType
-| sort by Solution asc, DataType asc
-```
-
-### Data volume by computer
-
-The **Usage** data type doesn't include information at the computer level. To see the **size** of ingested billable data per computer, use the **_BilledSize** [property](./log-standard-columns.md#_billedsize), which provides the size in bytes:
-
-```kusto
-find where TimeGenerated > ago(24h) project _BilledSize, _IsBillable, Computer, Type
-| where _IsBillable == true and Type != "Usage"
-| extend computerName = tolower(tostring(split(Computer, '.')[0]))
-| summarize BillableDataBytes = sum(_BilledSize) by computerName
-| sort by BillableDataBytes desc nulls last
-```
-
-The **_IsBillable** [property](./log-standard-columns.md#_isbillable) specifies whether the ingested data will incur charges. The **Usage** type is omitted because this is only for analytics of data trends.
-
-To see the **count** of billable events ingested per computer, use
-
-```kusto
-find where TimeGenerated > ago(24h) project _IsBillable, Computer
-| where _IsBillable == true and Type != "Usage"
-| extend computerName = tolower(tostring(split(Computer, '.')[0]))
-| summarize eventCount = count() by computerName
-| sort by eventCount desc nulls last
-```
-
-> [!TIP]
-> Use these `find` queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you don't need results **per computer**, query on the **Usage** data type.
-
-### Data volume by Azure resource, resource group, or subscription
-
-For data from nodes hosted in Azure, to get the **size** of ingested data __per Azure resource__, use the [_ResourceId property](./log-standard-columns.md#_resourceid), which provides the full path to the resource:
-
-```kusto
-find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillable
-| where _IsBillable == true
-| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId | sort by BillableDataBytes nulls last
-```
-
-For data from nodes hosted in Azure, you can get the **size** of ingested data __per Azure subscription__ by using the **_SubscriptionId** property as:
-
-```kusto
-find where TimeGenerated > ago(24h) project _BilledSize, _IsBillable, _SubscriptionId
-| where _IsBillable == true
-| summarize BillableDataBytes = sum(_BilledSize) by _SubscriptionId | sort by BillableDataBytes nulls last
-```
-
-To get data volume by resource group, you can parse **_ResourceId**:
-
-```kusto
-find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillable
-| where _IsBillable == true
-| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId
-| extend resourceGroup = tostring(split(_ResourceId, "/")[4] )
-| summarize BillableDataBytes = sum(BillableDataBytes) by resourceGroup | sort by BillableDataBytes nulls last
-```
-
-If needed, you can also parse the **_ResourceId** more fully:
-
-```Kusto
-| parse tolower(_ResourceId) with "/subscriptions/" subscriptionId "/resourcegroups/"
- resourceGroup "/providers/" provider "/" resourceType "/" resourceName
-```
-
-> [!TIP]
-> Use these `find` queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you don't need results per subscription, resource group, or resource name, query on the **Usage** data type.
-
-> [!WARNING]
-> Some of the fields of the **Usage** data type, while still in the schema, have been deprecated and their values are no longer populated.
-> These are **Computer**, as well as fields related to ingestion (**TotalBatches**, **BatchesWithinSla**, **BatchesOutsideSla**, **BatchesCapped** and **AverageProcessingTimeMs**).
-
-## Tips for reducing data volume
-
-This table lists some suggestions for reducing the volume of logs collected.
-
-| Source of high data volume | How to reduce data volume |
-| -- | - |
-| Data Collection Rules | The [Azure Monitor Agent](../agents/azure-monitor-agent-overview.md) uses Data Collection Rules to manage the collection of data. You can [limit the collection of data](../agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries) using custom XPath queries. |
-| Container Insights | [Configure Container Insights](../containers/container-insights-cost.md#controlling-ingestion-to-reduce-cost) to collect only the data you require. |
-| Microsoft Sentinel | Review any [Sentinel data sources](../../sentinel/connect-data-sources.md) that you recently enabled as sources of additional data volume. [Learn more](../../sentinel/azure-sentinel-billing.md) about Sentinel costs and billing. |
-| Security events | Select [common or minimal security events](../../security-center/security-center-enable-data-collection.md#data-collection-tier). <br> Change the security audit policy to collect only needed events. In particular, review the need to collect events for: <br> - [audit filtering platform](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772749(v=ws.10)). <br> - [audit registry](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd941614(v%3dws.10)). <br> - [audit file system](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772661(v%3dws.10)). <br> - [audit kernel object](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd941615(v%3dws.10)). <br> - [audit handle manipulation](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772626(v%3dws.10)). <br> - audit removable storage. |
-| Performance counters | Change the [performance counter configuration](../agents/data-sources-performance-counters.md) to: <br> - Reduce the frequency of collection. <br> - Reduce the number of performance counters. |
-| Event logs | Change the [event log configuration](../agents/data-sources-windows-events.md) to: <br> - Reduce the number of event logs collected. <br> - Collect only required event levels. For example, do not collect *Information* level events. |
-| Syslog | Change the [syslog configuration](../agents/data-sources-syslog.md) to: <br> - Reduce the number of facilities collected. <br> - Collect only required event levels. For example, do not collect *Info* and *Debug* level events. |
-| AzureDiagnostics | Change the [resource log collection](../essentials/diagnostic-settings.md#create-in-azure-portal) to: <br> - Reduce the number of resources that send logs to Log Analytics. <br> - Collect only required logs. |
-| Solution data from computers that don't need the solution | Use [solution targeting](../insights/solution-targeting.md) to collect data from only required groups of computers. |
-| Application Insights | Review options for [managing Application Insights data volume](../app/pricing.md#managing-your-data-volume). |
-| [SQL Analytics](../insights/azure-sql.md) | Use [Set-AzSqlServerAudit](/powershell/module/az.sql/set-azsqlserveraudit) to tune the auditing settings. |
-
-## Create an alert when data collection is high
-
-This section describes how to create an alert when the data volume in the last 24 hours exceeded a specified amount, using Azure Monitor [Log Alerts](../alerts/alerts-unified-log.md).
-
-To alert if the billable data volume ingested in the last 24 hours was greater than 50 GB:
-
-- **Define alert condition**: specify your Log Analytics workspace as the resource target.
-- **Alert criteria**: specify the following:
- - **Signal Name** select **Custom log search**
- - **Search query** to `Usage | where IsBillable | summarize DataGB = sum(Quantity / 1000.) | where DataGB > 50`.
- - **Alert logic** is **Based on** *number of results* and **Condition** is *Greater than* a **Threshold** of *0*
- - **Time period** of *1440* minutes and **Alert frequency** to every *1440* minutes to run once a day.
-- **Define alert details**: specify the following:
- - **Name** to *Billable data volume greater than 50 GB in 24 hours*
- - **Severity** to *Warning*
-
-To be notified when the log alert matches criteria, specify an existing [action group](../alerts/action-groups.md) or create a new one.
-
-When you receive an alert, use the steps in the above sections about how to troubleshoot why usage is higher than expected.
-
-## Querying for common data types
-
-To dig deeper into the source of data for a particular data type, here are some useful example queries:
-
-+ **Workspace-based Application Insights** resources
- - Learn more at [Manage usage and costs for Application Insights](../app/pricing.md#data-volume-for-workspace-based-application-insights-resources)
-+ **Security** solution
- - `SecurityEvent | summarize AggregatedValue = count() by EventID`
-+ **Log Management** solution
- - `Usage | where Solution == "LogManagement" and iff(isnotnull(toint(IsBillable)), IsBillable == true, IsBillable == "true") == true | summarize AggregatedValue = count() by DataType`
-+ **Perf** data type
- - `Perf | summarize AggregatedValue = count() by CounterPath`
- - `Perf | summarize AggregatedValue = count() by CounterName`
-+ **Event** data type
- - `Event | summarize AggregatedValue = count() by EventID`
- - `Event | summarize AggregatedValue = count() by EventLog, EventLevelName`
-+ **Syslog** data type
- - `Syslog | summarize AggregatedValue = count() by Facility, SeverityLevel`
- - `Syslog | summarize AggregatedValue = count() by ProcessName`
-+ **AzureDiagnostics** data type
- - `AzureDiagnostics | summarize AggregatedValue = count() by ResourceProvider, ResourceId`
-
-
-## Understanding nodes sending data
-
-To understand the number of nodes that are reporting heartbeats from the agent each day in the last month, use this query:
-
-```kusto
-Heartbeat
-| where TimeGenerated > startofday(ago(31d))
-| summarize nodes = dcount(Computer) by bin(TimeGenerated, 1d)
-| render timechart
-```
-To get a count of nodes sending data in the last 24 hours, use this query:
-
-```kusto
-find where TimeGenerated > ago(24h) project Computer
-| extend computerName = tolower(tostring(split(Computer, '.')[0]))
-| where computerName != ""
-| summarize nodes = dcount(computerName)
-```
-
-To get a list of nodes sending any data (and the amount of data sent by each), use this query:
-
-```kusto
-find where TimeGenerated > ago(24h) project _BilledSize, Computer
-| extend computerName = tolower(tostring(split(Computer, '.')[0]))
-| where computerName != ""
-| summarize TotalVolumeBytes=sum(_BilledSize) by computerName
-```
-
-> [!TIP]
-> Use these `find` queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you don't need results **per computer**, then query on the **Usage** data type.
-
-### Nodes billed by the legacy Per Node pricing tier
-
-The [legacy Per Node pricing tier](#legacy-pricing-tiers) bills for nodes with hourly granularity and also doesn't count nodes that are only sending a set of security data types. To get a list of computers that will be billed as nodes if the workspace is in the legacy Per Node pricing tier, look for nodes that are sending **billed data types** (some data types are free). To do this, use the [_IsBillable property](./log-standard-columns.md#_isbillable) and use the leftmost field of the fully qualified domain name. This returns the count of computers with billed data per hour:
-
-```kusto
-find where TimeGenerated >= startofday(ago(7d)) and TimeGenerated < startofday(now()) project Computer, _IsBillable, Type, TimeGenerated
-| where Type !in ("SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent")
-| extend computerName = tolower(tostring(split(Computer, '.')[0]))
-| where computerName != ""
-| where _IsBillable == true
-| summarize billableNodesPerHour=dcount(computerName) by bin(TimeGenerated, 1h)
-| summarize billableNodesPerDay = sum(billableNodesPerHour)/24., billableNodeMonthsPerDay = sum(billableNodesPerHour)/24./31. by day=bin(TimeGenerated, 1d)
-| sort by day asc
-```
-
-The number of units on your bill is in units of node months, which is represented by `billableNodeMonthsPerDay` in the query.
-If the workspace has the Update Management solution installed, add the **Update** and **UpdateSummary** data types to the list in the where clause in the above query. Finally, there's some additional complexity in the actual billing algorithm when solution targeting is used that's not represented in the above query.
-
-> [!TIP]
-> Use these `find` queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you don't need results **per computer**, then query on the **Usage** data type.
-
-### Getting Security and Automation node counts
-
-To see the number of distinct Security nodes, you can use the query:
-
-```kusto
-union
-(
- Heartbeat
- | where (Solutions has 'security' or Solutions has 'antimalware' or Solutions has 'securitycenter')
- | project Computer
-),
-(
- ProtectionStatus
- | where Computer !in~
- (
- (
- Heartbeat
- | project Computer
- )
- )
- | project Computer
-)
-| distinct Computer
-| project lowComputer = tolower(Computer)
-| distinct lowComputer
-| count
-```
-
-To see the number of distinct Automation nodes, use the query:
-
-```kusto
- ConfigurationData
- | where (ConfigDataType == "WindowsServices" or ConfigDataType == "Software" or ConfigDataType =="Daemons")
- | extend lowComputer = tolower(Computer) | summarize by lowComputer
- | join (
- Heartbeat
- | where SCAgentChannel == "Direct"
- | extend lowComputer = tolower(Computer) | summarize by lowComputer, ComputerEnvironment
- ) on lowComputer
- | summarize count() by ComputerEnvironment | sort by ComputerEnvironment asc
-```
-
-## Evaluating the legacy Per Node pricing tier
-
-It's often difficult for customers to assess whether workspaces with access to the legacy **Per Node** pricing tier are better off in that tier or in a current **Pay-As-You-Go** or **Commitment Tier**. This involves understanding the trade-off between the fixed cost per monitored node in the Per Node pricing tier (with its included data allocation of 500 MB/node/day) and the cost of just paying for ingested data in the Pay-As-You-Go (Per GB) tier.
-
-To facilitate this assessment, the following query can be used to make a recommendation for the optimal pricing tier based on a workspace's usage patterns. This query looks at the monitored nodes and data ingested into a workspace in the last seven days, and for each day, it evaluates which pricing tier would have been optimal. To use the query, you need to specify:
-
-- Whether the workspace is using Microsoft Defender for Cloud by setting **workspaceHasSecurityCenter** to **true** or **false**.
-- Update the prices if you have specific discounts.
-- Specify the number of days to look back and analyze by setting **daysToEvaluate**. This is useful if the query is taking too long trying to look at seven days of data.
-
-Here is the pricing tier recommendation query:
-
-```kusto
-// Set these parameters before running query.
-// For Pay-As-You-Go (per-GB) and commitment tier pricing details, see https://azure.microsoft.com/pricing/details/monitor/.
-// You can see your per-node costs in your Azure usage and charge data. For more information, see https://docs.microsoft.com/en-us/azure/cost-management-billing/understand/download-azure-daily-usage.
-let daysToEvaluate = 7; // Enter number of previous days to analyze (reduce if the query is taking too long)
-let workspaceHasSecurityCenter = false; // Specify if the workspace has Defender for Cloud (formerly known as Azure Security Center)
-let PerNodePrice = 15.; // Monthly price per monitored node
-let PerNodeOveragePrice = 2.30; // Price per GB for data overage in the Per Node pricing tier
-let PerGBPrice = 2.30; // Enter the Pay-as-you-go price for your workspace's region (from https://azure.microsoft.com/pricing/details/monitor/)
-let CommitmentTier100Price = 196.; // Enter your price for the 100 GB/day commitment tier
-let CommitmentTier200Price = 368.; // Enter your price for the 200 GB/day commitment tier
-let CommitmentTier300Price = 540.; // Enter your price for the 300 GB/day commitment tier
-let CommitmentTier400Price = 704.; // Enter your price for the 400 GB/day commitment tier
-let CommitmentTier500Price = 865.; // Enter your price for the 500 GB/day commitment tier
-let CommitmentTier1000Price = 1700.; // Enter your price for the 1000 GB/day commitment tier
-let CommitmentTier2000Price = 3320.; // Enter your price for the 2000 GB/day commitment tier
-let CommitmentTier5000Price = 8050.; // Enter your price for the 5000 GB/day commitment tier
-//
-let SecurityDataTypes=dynamic(["SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent", "Update", "UpdateSummary"]);
-let StartDate = startofday(datetime_add("Day",-1*daysToEvaluate,now()));
-let EndDate = startofday(now());
-union *
-| where TimeGenerated >= StartDate and TimeGenerated < EndDate
-| extend computerName = tolower(tostring(split(Computer, '.')[0]))
-| where computerName != ""
-| summarize nodesPerHour = dcount(computerName) by bin(TimeGenerated, 1h)
-| summarize nodesPerDay = sum(nodesPerHour)/24. by day=bin(TimeGenerated, 1d)
-| join kind=leftouter (
- Heartbeat
- | where TimeGenerated >= StartDate and TimeGenerated < EndDate
- | where Computer != ""
- | summarize ASCnodesPerHour = dcount(Computer) by bin(TimeGenerated, 1h)
- | extend ASCnodesPerHour = iff(workspaceHasSecurityCenter, ASCnodesPerHour, 0)
- | summarize ASCnodesPerDay = sum(ASCnodesPerHour)/24. by day=bin(TimeGenerated, 1d)
-) on day
-| join (
- Usage
- | where TimeGenerated >= StartDate and TimeGenerated < EndDate
- | where IsBillable == true
- | extend NonSecurityData = iff(DataType !in (SecurityDataTypes), Quantity, 0.)
- | extend SecurityData = iff(DataType in (SecurityDataTypes), Quantity, 0.)
- | summarize DataGB=sum(Quantity)/1000., NonSecurityDataGB=sum(NonSecurityData)/1000., SecurityDataGB=sum(SecurityData)/1000. by day=bin(StartTime, 1d)
-) on day
-| extend AvgGbPerNode = NonSecurityDataGB / nodesPerDay
-| extend OverageGB = iff(workspaceHasSecurityCenter,
- max_of(DataGB - 0.5*nodesPerDay - 0.5*ASCnodesPerDay, 0.),
- max_of(DataGB - 0.5*nodesPerDay, 0.))
-| extend PerNodeDailyCost = nodesPerDay * PerNodePrice / 31. + OverageGB * PerNodeOveragePrice
-| extend billableGB = iff(workspaceHasSecurityCenter,
- (NonSecurityDataGB + max_of(SecurityDataGB - 0.5*ASCnodesPerDay, 0.)), DataGB )
-| extend PerGBDailyCost = billableGB * PerGBPrice
-| extend CommitmentTier100DailyCost = CommitmentTier100Price + max_of(billableGB - 100, 0.)* CommitmentTier100Price/100.
-| extend CommitmentTier200DailyCost = CommitmentTier200Price + max_of(billableGB - 200, 0.)* CommitmentTier200Price/200.
-| extend CommitmentTier300DailyCost = CommitmentTier300Price + max_of(billableGB - 300, 0.)* CommitmentTier300Price/300.
-| extend CommitmentTier400DailyCost = CommitmentTier400Price + max_of(billableGB - 400, 0.)* CommitmentTier400Price/400.
-| extend CommitmentTier500DailyCost = CommitmentTier500Price + max_of(billableGB - 500, 0.)* CommitmentTier500Price/500.
-| extend CommitmentTier1000DailyCost = CommitmentTier1000Price + max_of(billableGB - 1000, 0.)* CommitmentTier1000Price/1000.
-| extend CommitmentTier2000DailyCost = CommitmentTier2000Price + max_of(billableGB - 2000, 0.)* CommitmentTier2000Price/2000.
-| extend CommitmentTier5000DailyCost = CommitmentTier5000Price + max_of(billableGB - 5000, 0.)* CommitmentTier5000Price/5000.
-| extend MinCost = min_of(
- PerNodeDailyCost,PerGBDailyCost,CommitmentTier100DailyCost,CommitmentTier200DailyCost,
- CommitmentTier300DailyCost, CommitmentTier400DailyCost, CommitmentTier500DailyCost, CommitmentTier1000DailyCost, CommitmentTier2000DailyCost, CommitmentTier5000DailyCost)
-| extend Recommendation = case(
- MinCost == PerNodeDailyCost, "Per node tier",
- MinCost == PerGBDailyCost, "Pay-as-you-go tier",
- MinCost == CommitmentTier100DailyCost, "Commitment tier (100 GB/day)",
- MinCost == CommitmentTier200DailyCost, "Commitment tier (200 GB/day)",
- MinCost == CommitmentTier300DailyCost, "Commitment tier (300 GB/day)",
- MinCost == CommitmentTier400DailyCost, "Commitment tier (400 GB/day)",
- MinCost == CommitmentTier500DailyCost, "Commitment tier (500 GB/day)",
- MinCost == CommitmentTier1000DailyCost, "Commitment tier (1000 GB/day)",
- MinCost == CommitmentTier2000DailyCost, "Commitment tier (2000 GB/day)",
- MinCost == CommitmentTier5000DailyCost, "Commitment tier (5000 GB/day)",
- "Error"
-)
-| project day, nodesPerDay, ASCnodesPerDay, NonSecurityDataGB, SecurityDataGB, OverageGB, AvgGbPerNode, PerGBDailyCost, PerNodeDailyCost,
- CommitmentTier100DailyCost, CommitmentTier200DailyCost, CommitmentTier300DailyCost, CommitmentTier400DailyCost, CommitmentTier500DailyCost, CommitmentTier1000DailyCost, CommitmentTier2000DailyCost, CommitmentTier5000DailyCost, Recommendation
-| sort by day asc
-//| project day, Recommendation // Uncomment this line to see only the daily recommendation
-| sort by day asc
-```
-
-This query isn't an exact replication of how usage is calculated, but it provides pricing tier recommendations in most cases.
-
-> [!NOTE]
-> To use the entitlements that come from purchasing OMS E1 Suite, OMS E2 Suite, or OMS Add-On for System Center, choose the Log Analytics *Per Node* pricing tier.
-
-<a name="allocations"></a>
-
-## Viewing data allocation benefits
-
-To view data allocation benefits from sources such as [Microsoft Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/), [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5 and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/), or the [Sentinel Free Trial](https://azure.microsoft.com/pricing/details/microsoft-sentinel/), you need to [export your usage details](#viewing-log-analytics-usage-on-your-azure-bill). Open the exported usage spreadsheet and filter the "Instance ID" column to your workspace. (To select all of your workspaces in the spreadsheet, filter the Instance ID column to "contains /workspaces/".) Next, filter the ResourceRate column to show only rows where this is equal to zero. Now you will see the data allocations from these various sources.
-
-> [!NOTE]
-> The Defender for Servers data allocations of 500 MB/server/day will appear in rows with the meter name "Data Included per Node" and the meter category "Insight and Analytics" (the name of a legacy offer still used with this meter). If the workspace is in the legacy Per Node Log Analytics pricing tier, this meter also includes the data allocations from that Log Analytics pricing tier.
-
-## Late-arriving data
-
-Situations can arise where data is ingested with old timestamps, for example, if an agent can't communicate with Log Analytics because of a connectivity issue or when a host has an incorrect date/time. This can manifest itself as an apparent discrepancy between the ingested data reported by the **Usage** data type and a query summing **_BilledSize** over the raw data for a particular day specified by **TimeGenerated**, the timestamp when the event was generated.
-
-To diagnose late-arriving data issues, use the **_TimeReceived** column ([learn more](./log-standard-columns.md#_timereceived)) in addition to the **TimeGenerated** column. **_TimeReceived** is the time when the record was received by the Azure Monitor ingestion point in the Azure cloud. For example, if, using the **Usage** records, you observed high ingested data volumes of **W3CIISLog** data on May 2, 2021, here is a query that identifies the timestamps on this ingested data:
-
-```Kusto
-W3CIISLog
-| where TimeGenerated > datetime(1970-01-01)
-| where _TimeReceived >= datetime(2021-05-02) and _TimeReceived < datetime(2021-05-03)
-| where _IsBillable == true
-| summarize BillableDataMB = sum(_BilledSize)/1.E6 by bin(TimeGenerated, 1d)
-| sort by TimeGenerated asc
-```
-
-The `where TimeGenerated > datetime(1970-01-01)` statement is present only as a hint to the Log Analytics user interface to look over all data.
-
-## Data transfer charges using Log Analytics
-
-Sending data to Log Analytics might incur data bandwidth charges. However, that's limited to Virtual Machines where a Log Analytics agent is installed and doesn't apply when using Diagnostics settings or with other connectors that are built in to Microsoft Sentinel. As described in the [Azure Bandwidth pricing page](https://azure.microsoft.com/pricing/details/bandwidth/), data transfer between Azure services located in two regions is charged as outbound data transfer at the normal rate. Inbound data transfer is free. However, this charge is very small compared to the costs for Log Analytics data ingestion. So, controlling costs for Log Analytics needs to focus on your [ingested data volume](#understanding-ingested-data-volume).
-
-## Troubleshooting why Log Analytics is no longer collecting data
-
-If you're on the legacy Free pricing tier and have sent more than 500 MB of data in a day, data collection stops for the rest of the day. Reaching the daily limit is a common reason that Log Analytics stops collecting data, or data appears to be missing. Log Analytics creates an **Operation** type event when data collection starts and stops. Run the following query in search to check whether you're reaching the daily limit and missing data:
-
-```kusto
-Operation | where OperationCategory == 'Data Collection Status'
-```
-
-When data collection stops, the **OperationStatus** is **Warning**. When data collection starts, the **OperationStatus** is **Succeeded**. The following table lists reasons that data collection stops and a suggested action to resume data collection.
-
-|Reason collection stops| Solution|
-|--||
|Daily cap of your workspace was reached|Wait for collection to automatically restart, or increase the daily data volume limit as described in [Manage your maximum daily data volume](#manage-your-maximum-daily-data-volume). The daily cap reset time is shown on the **Daily Cap** page. |
-| Your workspace has hit the [Data Ingestion Volume Rate](../service-limits.md#log-analytics-workspaces) | The default ingestion volume rate limit for data sent from Azure resources using diagnostic settings is approximately 6 GB/min per workspace. This is an approximate value because the actual size can vary between data types, depending on the log length and its compression ratio. This limit doesn't apply to data that's sent from agents or the Data Collector API. If you send data at a higher rate to a single workspace, some data is dropped, and an event is sent to the Operation table in your workspace every 6 hours while the threshold continues to be exceeded. If your ingestion volume continues to exceed the rate limit or you are expecting to reach it sometime soon, you can request an increase to your workspace by sending an email to LAIngestionRate@microsoft.com or by opening a support request. The event to look for that indicates a data ingestion rate limit can be found by the query `Operation | where OperationCategory == "Ingestion" | where Detail startswith "The rate of data crossed the threshold"`. |
-|Daily limit of legacy Free pricing tier reached |Wait until the following day for collection to automatically restart, or change to a paid pricing tier.|
-|Azure subscription is in a suspended state due to:<br> Free trial ended<br> Azure pass expired<br> Monthly spending limit reached (such as on an MSDN or Visual Studio subscription)|Convert to a paid subscription<br> Remove limit, or wait until limit resets|
-
-To be notified when data collection stops, use the steps described in the [Alert when daily cap is reached](#alert-when-daily-cap-is-reached) section. To configure an e-mail, webhook, or runbook action for the alert rule, use the steps described in [create an action group](../alerts/action-groups.md).
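-
-As a minimal sketch, an action group with a single e-mail receiver could be defined with an ARM resource like the following; the names, e-mail address, and API version are placeholder assumptions to check against the current `Microsoft.Insights/actionGroups` schema. Once deployed, the alert rule described above can reference this action group:
-
-```json
-{
-  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
-  "contentVersion": "1.0.0.0",
-  "resources": [
-    {
-      "name": "DailyCapReachedActionGroup",
-      "type": "Microsoft.Insights/actionGroups",
-      "apiVersion": "2019-06-01",
-      "location": "Global",
-      "properties": {
-        "groupShortName": "DailyCap",
-        "enabled": true,
-        "emailReceivers": [
-          {
-            "name": "Operations team",
-            "emailAddress": "ops@contoso.com",
-            "useCommonAlertSchema": true
-          }
-        ]
-      }
-    }
-  ]
-}
-```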
-
-## Limits summary
-
-There are additional Log Analytics limits, some of which depend on the Log Analytics pricing tier. These are documented at [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md#log-analytics-workspaces).
--
-## Next steps
-- See [Log searches in Azure Monitor Logs](../logs/log-query-overview.md) to learn how to use the search language. You can use search queries to perform additional analysis on the usage data.
-- Use the steps described in [create a new log alert](../alerts/alerts-metric.md) to be notified when a search criterion is met.
-- Use [solution targeting](../insights/solution-targeting.md) to collect data from only required groups of computers.
-- To configure an effective event collection policy, review [Microsoft Defender for Cloud filtering policy](../../security-center/security-center-enable-data-collection.md).
-- Change [performance counter configuration](../agents/data-sources-performance-counters.md).
-- To modify your event collection settings, review [event log configuration](../agents/data-sources-windows-events.md).
-- To modify your syslog collection settings, review [syslog configuration](../agents/data-sources-syslog.md).
azure-monitor Monitor Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/monitor-workspace.md
Note, after reaching the set limit, your data collection will automatically stop
Recommended Actions: * Check _LogOperation table for collection stopped and collection resumed events.</br> `_LogOperation | where TimeGenerated >= ago(7d) | where Category == "Ingestion" | where Operation has "Data collection"`
-* [Create an alert](./manage-cost-storage.md#alert-when-daily-cap-is-reached) on "Data collection stopped" Operation event, this alert will allow you to get notified when the collection limit was reached.
+* [Create an alert](daily-cap.md#alert-when-daily-cap-is-reached) on the "Data collection stopped" Operation event; this alert notifies you when the collection limit is reached.
* Data collected after the daily collection limit is reached will be lost. Use the 'workspace insights' blade to review usage rates from each source.
-Or, you can decide to ([Manage your maximum daily data volume](./manage-cost-storage.md#manage-your-maximum-daily-data-volume) \ [change the pricing tier](./manage-cost-storage.md#changing-pricing-tier) to one that will suite your collection rates pattern).
-* Data collection rate is calculated per day, and will reset at the start of the next day, you can also monitor collection resume event by [Create an alert](./manage-cost-storage.md#alert-when-daily-cap-is-reached) on "Data collection resumed" Operation event.
+Or, you can decide to [manage your maximum daily data volume](daily-cap.md) or [change the pricing tier](cost-logs.md#commitment-tiers) to one that suits your collection rate pattern.
+* The data collection rate is calculated per day and resets at the start of the next day. You can also monitor the collection resume event by [creating an alert](./daily-cap.md#alert-when-daily-cap-is-reached) on the "Data collection resumed" Operation event.
#### Operation: Ingestion rate
"The data ingestion volume rate crossed the threshold in your workspace: {0:0.00} MB per one minute and data has been dropped."
Recommended Actions:
* Check the _LogOperation table for ingestion rate events: `_LogOperation | where TimeGenerated >= ago(7d) | where Category == "Ingestion" | where Operation has "Ingestion rate"`. Note: an event is sent to the Operation table in the workspace every 6 hours while the threshold continues to be exceeded.
-* [Create an alert](./manage-cost-storage.md#alert-when-daily-cap-is-reached) on "Data collection stopped" Operation event, this alert will allow you to get notified when the limit is reached.
+* [Create an alert](daily-cap.md#alert-when-daily-cap-is-reached) on "Data collection stopped" Operation event, this alert will allow you to get notified when the limit is reached.
* Data collected while the ingestion rate limit is at 100% will be dropped and lost. Use the 'workspace insights' blade to review your usage patterns and try to reduce them.</br> For further information: </br> [Azure Monitor service limits](../service-limits.md#data-ingestion-volume-rate) </br>
-[Manage usage and costs for Azure Monitor Logs](./manage-cost-storage.md#alert-when-daily-cap-is-reached)
+[Analyze usage in Log Analytics workspace](analyze-usage.md)
#### Operation: Maximum table column count
azure-monitor Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/service-limits.md
This article lists limits in different areas of Azure Monitor.
- [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/)
- [Monitoring usage and estimated costs in Azure Monitor](./usage-estimated-costs.md)
-- [Manage usage and costs for Application Insights](app/pricing.md)
azure-monitor Usage Estimated Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/usage-estimated-costs.md
Title: Monitor usage and estimated costs in Azure Monitor
-description: Get an overview of the process of using the page for Azure Monitor usage and estimated costs.
-
+ Title: Azure Monitor cost and usage
+description: Overview of how Azure Monitor is billed and how to estimate and analyze billable usage.
- Previously updated : 10/28/2019- Last updated : 03/28/2022
-# Monitor usage and estimated costs in Azure Monitor
+# Azure Monitor cost and usage
+This article describes the different ways that Azure Monitor charges for usage, how to evaluate charges on your Azure bill, and how to estimate charges to monitor your entire environment.
-This article describes how to view usage and estimated costs across multiple Azure monitoring features.
+## Pricing model
+Azure Monitor uses a consumption-based pricing (pay-as-you-go) billing model where you only pay for what you use.
+Features of Azure Monitor that are enabled by default do not incur any charge. This includes collection and alerting on the [Activity log](essentials/activity-log.md) and collection and analysis of [platform metrics](essentials/metrics-supported.md).
-## Azure Monitor pricing model
+Several other features don't have a direct cost, but you instead pay for the ingestion and retention of data that they collect. The following table describes the different types of usage that are charged in Azure Monitor. Detailed pricing for each is provided in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
-The basic Azure Monitor billing model is a cloud-friendly, consumption-based pricing (pay-as-you-go). You pay for only what you use. Pricing details are available for [alerting, metrics, and notifications](https://azure.microsoft.com/pricing/details/monitor/); [Log Analytics](https://azure.microsoft.com/pricing/details/log-analytics/); and [Application Insights](https://azure.microsoft.com/pricing/details/application-insights/).
-In addition to the pay-as-you-go model for log data, Azure Monitor Log Analytics has Commitment Tiers. They enable you to save as much as 30 percent compared to the pay-as-you-go pricing. Commitment Tiers start at 100 gigabytes (GB) a day. Any usage above the Commitment Tier will be billed at the same price per gigabyte as the Commitment Tier. [Learn more about Commitment Tier pricing](https://azure.microsoft.com/pricing/details/monitor/).
+| Type | Description |
+|:|:|
+| Logs | Ingestion, retention, and export of data in Log Analytics workspaces and legacy Application Insights resources. This will typically be the bulk of Azure Monitor charges for most customers. There is no charge for querying this data except in the case of [Basic Logs](logs/basic-logs-configure.md) or [Archived Logs](logs/data-retention-archive.md).<br><br>Charges for Logs can vary significantly based on the configuration that you choose. See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges for Logs data are calculated and the different pricing tiers available. |
+| Resource Logs | [Diagnostic and auditing information](essentials/resource-logs.md) is charged for [certain services](essentials/resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there is a charge for the workspace data ingestion and collection. |
+| Custom metrics | There is no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There is a cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). |
+| Alerts | Charged based on the type and number of [signals](alerts/alerts-overview.md#what-you-can-alert-on) used by the alert rule, its frequency, and the type of notification used in response. |
+| Multi-step web tests | There is a cost for [multi-step web tests](app/availability-multistep.md) in Application Insights, but this feature has been deprecated. |
-Some customers have access to [legacy Log Analytics pricing tiers](logs/manage-cost-storage.md#legacy-pricing-tiers) and the [legacy Enterprise Application Insights pricing tier](app/pricing.md#legacy-enterprise-per-node-pricing-tier).
+## Data transfer charges
+Sending data to Azure Monitor can incur data bandwidth charges. As described in the [Azure Bandwidth pricing page](https://azure.microsoft.com/pricing/details/bandwidth/), data transfer between Azure services located in two regions is charged as outbound data transfer at the normal rate. Inbound data transfer is free. This charge is typically very small compared to the costs for data ingestion and retention, so efforts to control costs for Log Analytics should focus on your ingested data volume.
-## Azure Monitor costs
+## Estimate Azure Monitor usage and costs
+If you're new to Azure Monitor, you can use the [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to estimate your costs. In the **Search** box, enter *Azure Monitor*, and then select the **Azure Monitor** tile. The pricing calculator will help you estimate your likely costs based on your expected utilization.
-There are two phases for understanding costs: estimating costs when you're considering Azure Monitor as your monitoring solution, and then tracking actual costs after deployment.
+The bulk of your costs will typically be from data ingestion and retention for your Log Analytics workspaces and Application Insights resources. It's difficult to give accurate estimates for data volumes that you can expect since they'll vary significantly based on your configuration. A common strategy is to enable monitoring for a small group of resources and use the observed data volumes with the calculator to determine your costs for a full environment. See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for queries and other methods to measure the billable data in your Log Analytics workspace.
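As a sketch of the kind of measurement that article describes, the following query sums billable ingestion by data type over the past 31 full days. It assumes the standard `Usage` table, where `Quantity` is reported in MB:

```kusto
// Billable data volume (GB) by data type over the last 31 full days
Usage
| where TimeGenerated > ago(32d)
| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
| where IsBillable == true
| summarize BillableDataGB = sum(Quantity) / 1000. by DataType
| sort by BillableDataGB desc
```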
-### Estimate the costs to manage your environment
+Following is basic guidance that you can use for common resources:
-If you're not yet using Azure Monitor Logs, you can use the [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to estimate the cost of using Azure Monitor. Start by entering **Azure Monitor** in the **Search** box, and then selecting the **Azure Monitor** tile. Scroll down the page to **Azure Monitor**, and select one of the options from the **Type** dropdown list:
+- **Virtual machines.** With typical monitoring enabled, a virtual machine will generate between 1 GB and 3 GB of data per month. This is highly dependent on the configuration of your agents.
+- **Application Insights.** See the following section for different methods to estimate data from your applications.
+- **Container insights.** See [Estimating costs to monitor your AKS cluster](containers/container-insights-cost.md#estimating-costs-to-monitor-your-aks-cluster) for guidance on estimating data for your AKS cluster.
-- **Metrics queries and Alerts** -- **Log Analytics**-- **Application Insights**
+## Estimate application usage
+There are two methods that you can use to estimate the amount of data from an application monitored with Application Insights.
+
+### Learn from what similar applications collect
+In the Azure Monitor pricing calculator for Application Insights, enable **Estimate data volume based on application activity**, which allows you to provide inputs about your application. The calculator will then tell you the median and 90th percentile amount of data collected by similar applications. These applications span the range of Application Insights configurations, so you can still use options such as [sampling](app/sampling.md) to reduce the volume of data you ingest for your application below the median level.
-In each of these types, the pricing calculator will help you estimate your likely costs based on your expected utilization.
+### Data collection when using sampling
+With the ASP.NET SDK's [adaptive sampling](app/sampling.md#adaptive-sampling), the data volume is adjusted automatically to keep within a specified maximum rate of traffic for default Application Insights monitoring. If the application produces a low amount of telemetry, such as when debugging or due to low usage, items won't be dropped by the sampling processor as long as the volume is below the configured events-per-second level. For a high-volume application, with the default threshold of five events per second, adaptive sampling limits the number of daily events to 432,000. Considering a typical average event size of 1 KB, this corresponds to 13.4 GB of telemetry per 31-day month per node hosting your application, since the sampling is done locally on each node.
-For example, with Log Analytics, you can enter the number of virtual machines (VMs) and the gigabytes of data that you expect to collect from each VM. Typically, 1 GB to 3 GB of data per month is ingested from an Azure VM. If you're already evaluating Azure Monitor Logs, you can use your data statistics from your own environment. You can determine the [number of monitored VMs](logs/manage-cost-storage.md#understanding-nodes-sending-data) and the [volume of data that your workspace is ingesting](logs/manage-cost-storage.md#understanding-ingested-data-volume).
+For SDKs that don't support adaptive sampling, you can employ [ingestion sampling](app/sampling.md#ingestion-sampling), which samples when the data is received by Application Insights based on a percentage of data to retain, or [fixed-rate sampling for ASP.NET, ASP.NET Core, and Java websites](app/sampling.md#fixed-rate-sampling) to reduce the traffic sent from your web server and web browsers.
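If you want to confirm the sampling rate that's actually being applied, one rough check (a sketch assuming the Application Insights Logs experience and the standard `itemCount` column) is to compare stored records against the original item counts:

```kusto
// Approximate percentage of request telemetry retained after sampling, per hour
requests
| where timestamp > ago(1d)
| summarize RetainedPercentage = 100 / avg(itemCount) by bin(timestamp, 1h)
| render timechart
```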
-For Application Insights, if you enable the **Estimate data volume based on application activity** functionality, you can provide inputs about your application (requests per month and page views per month, if you'll collect client-side telemetry). Then the calculator will tell you the median and 90th percentile amount of data that similar applications collect.
-These applications span the range of Application Insights configurations. For example, some have default sampling, some have no sampling, and some have custom sampling. So you still have the control to reduce the volume of data that you ingest to far below the median level by using sampling. But this is a starting point to understand what similar customers are seeing. [Learn more about estimating costs for Application Insights](app/pricing.md#estimating-the-costs-to-manage-your-application).
+## Viewing Azure Monitor usage and charges
+There are two primary tools to view and analyze your Azure Monitor billing and estimated charges.
-### Track usage and costs
+- [Azure Cost Management + Billing](#azure-cost-management--billing) is the primary tool that you'll use to analyze your usage and costs. It gives you multiple options to analyze your monthly charges for different Azure Monitor features and their projected cost over time.
+- [Usage and Estimated Costs](#usage-and-estimated-costs) provides a listing of monthly charges for different Azure Monitor features. This is particularly useful for Log Analytics workspaces, where it helps you select your pricing tier by showing how your cost would differ across tiers.
-It's important to understand and track your usage after you start using Azure Monitor. A rich set of tools can help facilitate this tracking.
-#### Azure Cost Management + Billing
+## Azure Cost Management + Billing
+Azure Cost Management + Billing includes several built-in dashboards for deep cost analysis like cost by resource and invoice details. To get started analyzing your Azure Monitor charges, open [Cost Management + Billing](../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=/azure/billing/TOC.json) in the Azure portal. Select **Cost Management** and then **Cost analysis**. Select your subscription or another [scope](../cost-management-billing/costs/understand-work-scopes.md).
-Azure provides useful functionality in the [Azure Cost Management + Billing](../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=/azure/billing/TOC.json) hub. After you open the hub, select **Cost Management** and select the [scope](../cost-management-billing/costs/understand-work-scopes.md) (the set of resources to investigate). You might need additional access to Cost Management data ([learn more](../cost-management-billing/costs/assign-access-acm-data.md)).
+>[!NOTE]
+>You might need additional access to Cost Management data. See [Assign access to Cost Management data](../cost-management-billing/costs/assign-access-acm-data.md).
-To see the Azure Monitor costs for the last 30 days, select the **Daily Costs** tile, select **Last 30 days** under **Relative dates**, and add a filter that selects the service names:
+To limit the view to Azure Monitor charges, [create a filter](../cost-management-billing/costs/group-filter.md) for the following **Service names**:
- **Azure Monitor** - **Application Insights** - **Log Analytics** - **Insight and Analytics**
-The result is a view like the following example:
+Other services such as Microsoft Defender for Cloud and Microsoft Sentinel also bill their usage against Log Analytics workspace resources, so you may want to add them to your filter. See [Common cost analysis uses](../cost-management-billing/costs/cost-analysis-common-uses.md) for details on using this view.
![Screenshot that shows Azure Cost Management with cost information.](./media/usage-estimated-costs/010.png)
-You can drill in from this accumulated cost summary to get the finer details in the **Cost by resource** view. In the current pricing tiers, Azure log data is charged on the same set of meters whether it originates from Log Analytics or Application Insights.
+>[!NOTE]
+>Alternatively, you can go to the **Overview** page of a Log Analytics workspace or Application Insights resource and click **View Cost** in the upper right corner of the **Essentials** section. This will launch the **Cost Analysis** from Azure Cost Management + Billing already scoped to the workspace or application.
+> :::image type="content" source="logs/media/view-bill/view-cost-option.png" lightbox="logs/media/view-bill/view-cost-option.png" alt-text="Screenshot of option to view cost for Log Analytics workspace.":::
+
+### Download usage
+To gain more understanding of your usage, you can download your usage from the Azure portal and see usage per Azure resource in the downloaded spreadsheet. See [Tutorial: Create and manage exported data](../cost-management-billing/costs/tutorial-export-acm-data.md) for a tutorial, including how to automatically create a daily report that you can use for regular analysis.
+
+Usage from your Log Analytics workspaces can be found by first filtering on the **Meter Category** column to show *Log Analytics*, *Insight and Analytics* (used by some of the legacy pricing tiers), and *Azure Monitor* (used by commitment tier pricing tiers). Add a filter on the *Instance ID* column for *contains workspace* or *contains cluster*. The usage is shown in the **Consumed Quantity** column, and the unit for each entry is shown in the **Unit of Measure** column.
+
+### Application Insights meters
+Most Application Insights usage for both classic and workspace-based resources is reported on meters with **Log Analytics** for **Meter Category**, because there's a single log back end for all Azure Monitor components. Only Application Insights resources on legacy pricing tiers and multiple-step web tests are reported with **Application Insights** for **Meter Category**. The usage is shown in the **Consumed Quantity** column, and the unit for each entry is shown in the **Unit of Measure** column. See [understand your Microsoft Azure bill](../cost-management-billing/understand/review-individual-bill.md) for more details.
+
+To separate costs from your Log Analytics or Application Insights usage, [create a filter](../cost-management-billing/costs/group-filter.md) on **Resource type**. To see all Application Insights costs, filter **Resource type** to **microsoft.insights/components**. For Log Analytics costs, filter **Resource type** to **microsoft.operationalinsights/workspaces**.
-To separate costs from your Log Analytics or Application Insights usage, you can add a filter on **Resource type**. To see all Application Insights costs, filter **Resource type** to **microsoft.insights/components**. For Log Analytics costs, filter **Resource type** to **microsoft.operationalinsights/workspaces**.
+## Usage and estimated costs
+You can get additional usage details about Log Analytics workspaces and Application Insights resources from the **Usage and Estimated Costs** option for each.
+### Log Analytics workspace
+To learn about your usage trends and choose the most cost-effective [commitment tier](logs/cost-logs.md#commitment-tiers) for your Log Analytics workspace, select **Usage and Estimated Costs** from the **Log Analytics workspace** menu in the Azure portal.
-More details about your usage are available if you [download your usage from the Azure portal](../cost-management-billing/understand/download-azure-daily-usage.md). In the downloaded Excel spreadsheet, you can see usage per Azure resource per day. You can find usage from your Application Insights resources by filtering on the **Meter Category** column to show **Application Insights** and **Log Analytics**. Then add a **contains microsoft.insights/components** filter on the **Instance ID** column.
-Most Application Insights usage is reported on meters with **Log Analytics** for **Meter Category**, because there's a single log back end for all Azure Monitor components. Only Application Insights resources on legacy pricing tiers and multiple-step web tests are reported with **Application Insights** for **Meter Category**. The usage is shown in the **Consumed Quantity** column, and the unit for each entry is shown in the **Unit of Measure** column. More details are available to help you [understand your Microsoft Azure bill](../cost-management-billing/understand/review-individual-bill.md).
+This view includes the following:
-#### Usage and estimated costs
+A. Estimated monthly charges based on usage from the past 31 days using the current pricing tier.<br>
+B. Estimated monthly charges using different commitment tiers.<br>
+C. Billable data ingestion by solution from the past 31 days.
-Another option for viewing your Azure Monitor usage is the **Usage and estimated costs** page in the Monitor hub. This page shows the usage of core monitoring features such as [alerting, metrics, and notifications](https://azure.microsoft.com/pricing/details/monitor/); [Azure Log Analytics](https://azure.microsoft.com/pricing/details/log-analytics/); and [Azure Application Insights](https://azure.microsoft.com/pricing/details/application-insights/). For customers on the pricing plans available before April 2018, this page also includes Log Analytics usage purchased through the Insights and Analytics offer.
+To explore the data in more detail, click on the icon in the upper-right corner of either chart to work with the query in Log Analytics.
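A rough approximation of the billable-data-by-solution chart that you can run yourself in Log Analytics, assuming the standard `Usage` table (with `Quantity` in MB), looks like this:

```kusto
// Billable ingestion (GB) by solution over the last 31 full days
Usage
| where TimeGenerated > ago(32d)
| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
| where IsBillable == true
| summarize BillableDataGB = sum(Quantity) / 1000. by Solution
| render piechart
```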
-On this page, users can view their resource usage for the past 31 days, aggregated per subscription. Drill-ins show usage trends over the 31-day period. A lot of data needs to come together for this estimate, so please be patient as the page loads.
-This example shows monitoring usage and an estimate of the resulting costs:
+### Application Insights
+To learn about your usage trends for your classic Application Insights resource, select **Usage and Estimated Costs** from the **Applications** menu in the Azure portal.
-![Screenshot of the Azure portal that shows usage and estimated costs.](./media/usage-estimated-costs/001.png)
-Select the link in the **MONTHLY USAGE** column to open a chart that shows usage trends over the last 31-day period:
+This view includes the following:
-![Screenshot that shows a bar chart for included data volume per node.](./media/usage-estimated-costs/002.png)
+A. Estimated monthly charges based on usage from the past month.<br>
+B. Billable data ingestion by table from the past month.
+
+To investigate your Application Insights usage more deeply, open the **Metrics** page, add the metric named *Data point volume*, and then select the *Apply splitting* option to split the data by "Telemetry item type".
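As an alternative rough check from the Logs experience of a classic resource, the following sketch counts telemetry items by type. It reports item counts rather than billed bytes and assumes the standard `itemType` and `itemCount` columns:

```kusto
// Telemetry items by type over the past 7 days (counts, not billed size)
union requests, dependencies, pageViews, traces, exceptions, customEvents
| where timestamp > ago(7d)
| summarize Items = sum(itemCount) by itemType
| sort by Items desc
```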
++
+## Viewing data allocation benefits
+
+To view data allocation benefits from sources such as [Microsoft Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/), [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5 and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/), or the [Sentinel Free Trial](https://azure.microsoft.com/pricing/details/microsoft-sentinel/), you need to export your usage details. Open the exported usage spreadsheet and filter the "Instance ID" column to your workspace. (To select all of your workspaces in the spreadsheet, filter the Instance ID column to "contains /workspaces/".) Next, filter the ResourceRate column to show only rows where this is equal to zero. Now you will see the data allocations from these various sources.
> [!NOTE]
-> Using **Cost Management** in the **Azure Cost Management + Billing** hub is the preferred approach to broadly understanding monitoring costs. The **Usage and estimated costs** experiences for [Log Analytics](logs/manage-cost-storage.md#understand-your-usage-and-estimate-costs) and [Application Insights](app/pricing.md#understand-your-usage-and-estimate-costs) provide deeper insights for each of those parts of Azure Monitor.
+> Data allocations from Defender for Servers (500 MB/server/day) appear in rows with the meter name "Data Included per Node" and the meter category "Insight and Analytics" (the name of a legacy offer still used with this meter). If the workspace is in the legacy Per Node Log Analytics pricing tier, this meter also includes the data allocations from this Log Analytics pricing tier.
+ ## Operations Management Suite subscription entitlements
-Customers who purchased Microsoft Operations Management Suite E1 and E2 are eligible for per-node data ingestion entitlements for [Log Analytics](https://www.microsoft.com/cloud-platform/operations-management-suite) and [Application Insights](app/pricing.md). For customers to receive these entitlements for Log Analytics workspaces or Application Insights resources in a subscription:
+Customers who purchased Microsoft Operations Management Suite E1 and E2 are eligible for per-node data ingestion entitlements for Log Analytics and Application Insights. Each Application Insights node includes up to 200 MB of data ingested per day (separate from Log Analytics data ingestion), with 90-day data retention at no extra cost.
-- Log Analytics workspaces should use the Per-Node (OMS) pricing tier.-- Application Insights resources should use the Enterprise pricing tier.
+To receive these entitlements for Log Analytics workspaces or Application Insights resources in a subscription, they must use the Per-Node (OMS) pricing tier. This entitlement isn't visible in the estimated costs shown in the Usage and estimated costs pane.
Depending on the number of nodes of the suite that your organization purchased, moving some subscriptions into a Per GB (pay-as-you-go) pricing tier might be advantageous, but this requires careful consideration. +
+Also, if you move a subscription to the new Azure monitoring pricing model that was introduced in April 2018, the Per GB tier is the only tier available. Moving a subscription to the new Azure monitoring pricing model isn't advisable if you have an Operations Management Suite subscription.
+ > [!TIP] > If your organization has Microsoft Operations Management Suite E1 or E2, it's usually best to keep your Log Analytics workspaces in the Per-Node (OMS) pricing tier and your Application Insights resources in the Enterprise pricing tier. > ## Next steps
-Get cost information for specific components of Azure Monitor:
--- [Manage usage and costs with Azure Monitor Logs](logs/manage-cost-storage.md) describes how to control your costs by changing your data retention period, and how to analyze and alert on your data usage.-- [Manage usage and costs for Application Insights](app/pricing.md) describes how to analyze data usage in Application Insights.
+- See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
+- See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce the amount of data collected.
+- See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) to control your costs by setting a daily limit on the amount of data that may be ingested in a workspace.
+- See [Azure Monitor best practices - Cost management](best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges.
azure-monitor Monitor Virtual Machine Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-configure.md
The following table lists the steps that must be performed for this configuratio
Azure Monitor provides a basic level of monitoring for Azure virtual machines at no cost and with no configuration. Platform metrics for Azure virtual machines include important metrics such as CPU, network, and disk utilization. They can be viewed on the [Overview page](monitor-virtual-machine-analyze.md#single-machine-experience) for the machine in the Azure portal. The Activity log is also collected automatically and includes the recent activity of the machine, such as any configuration changes and when it was stopped and started. ## Create and prepare a Log Analytics workspace
-You require at least one Log Analytics workspace to support VM insights and to collect telemetry from the Log Analytics agent. There's no cost for the workspace, but you do incur ingestion and retention costs when you collect data. For more information, see [Manage usage and costs with Azure Monitor Logs](../logs/manage-cost-storage.md).
+You require at least one Log Analytics workspace to support VM insights and to collect telemetry from the Log Analytics agent. There's no cost for the workspace, but you do incur ingestion and retention costs when you collect data. For more information, see [Azure Monitor Logs pricing details](../logs/cost-logs.md).
Many environments use a single workspace for all their virtual machines and other Azure resources they monitor. You can even share a workspace used by [Microsoft Defender for Cloud and Microsoft Sentinel](monitor-virtual-machine-security.md), although many customers choose to segregate their availability and performance telemetry from security data. If you're getting started with Azure Monitor, start with a single workspace and consider creating more workspaces as your requirements evolve.
azure-monitor Vminsights Health Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-health-overview.md
VM insights guest health allows you to view the health of virtual machines based
See [Enable VM insights guest health (preview)](vminsights-health-enable.md) for details on enabling the guest health feature and onboarding virtual machines. ## Pricing
-There is no direct cost for the guest health feature, but there is a cost for ingestion and storage of health state data in the Log Analytics workspace. All data is stored in the *HealthStateChangeEvent* table. See [Manage usage and costs with Azure Monitor Logs](../logs/manage-cost-storage.md) for details on pricing models and costs.
+There is no direct cost for the guest health feature, but there is a cost for ingestion and storage of health state data in the Log Analytics workspace. All data is stored in the *HealthStateChangeEvent* table. See [Azure Monitor Logs pricing details](../logs/cost-logs.md) for details on pricing models and costs.
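To gauge that cost, here's a minimal sketch that sums the billed size of guest health data per day, assuming the standard `_BilledSize` column (in bytes) is populated for this table:

```kusto
// Daily volume (GB) of guest health data ingested over the past 31 days
HealthStateChangeEvent
| where TimeGenerated > ago(31d)
| summarize IngestedGB = sum(_BilledSize) / pow(1024, 3) by bin(TimeGenerated, 1d)
| render timechart
```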
## View virtual machine health The **Guest VM Health** column in the **Get Started** page gives you a quick view of the health of each virtual machine in a particular subscription or resource group. The current health of each virtual machine is displayed while icons for each group show the number of virtual machines currently in each state in that group.
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
- [Azure Monitor customer-managed key](logs/customer-managed-keys.md) - [Log Analytics workspace data export in Azure Monitor (preview)](logs/logs-data-export.md)-- [Manage usage and costs with Azure Monitor Logs](logs/manage-cost-storage.md) ## December, 2021
This article lists significant changes to Azure Monitor documentation.
**Updated articles** - [Troubleshooting no data - Application Insights for .NET/.NET Core](app/asp-net-troubleshoot-no-data.md)-- [Manage usage and costs for Application Insights](app/pricing.md) - [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](app/java-in-process-agent.md) - [Enable Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications (preview)](app/opentelemetry-enable.md) - [Release notes for Azure Web App extension for Application Insights](app/web-app-extension-release-notes.md)
This article lists significant changes to Azure Monitor documentation.
- [Log Analytics workspace data export in Azure Monitor (preview)](logs/logs-data-export.md) - [Azure Monitor customer-managed key](logs/customer-managed-keys.md) - [Azure Monitor Logs Dedicated Clusters](logs/logs-dedicated-clusters.md)-- [Manage usage and costs with Azure Monitor Logs](logs/manage-cost-storage.md) ### Virtual Machines
This article lists significant changes to Azure Monitor documentation.
**Updated articles** - [Log Analytics tutorial](logs/log-analytics-tutorial.md)-- [Manage usage and costs with Azure Monitor Logs](logs/manage-cost-storage.md) - [Use Azure Private Link to securely connect networks to Azure Monitor](logs/private-link-security.md) - [Azure Monitor Logs Dedicated Clusters](logs/logs-dedicated-clusters.md) - [Monitor health of Log Analytics workspace in Azure Monitor](logs/monitor-workspace.md)
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 03/18/2022 Last updated : 04/07/2022 # Solution architectures using Azure NetApp Files
This section provides references for solutions for Linux OSS applications and da
* [Cloudera Machine Learning](https://docs.cloudera.com/machine-learning/cloud/requirements-azure/topics/ml-requirements-azure.html) * [Distributed training in Azure: Lane detection - Solution design](https://www.netapp.com/media/32427-tr-4896-design.pdf) * [Distributed training in Azure: Click-Through Rate Prediction – Solution design](https://docs.netapp.com/us-en/netapp-solutions/ai/aks-anf_introduction.html)
+* [How to use Azure Machine Learning with Azure NetApp Files](https://github.com/csiebler/azureml-with-azure-netapp-files)
### Education
azure-sql Advance Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/advance-notifications.md
Previously updated : 03/25/2022 Last updated : 04/04/2022 # Advance notifications for planned maintenance events (Preview) [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
resources
| extend p = parse_json(properties) | mvexpand d = p.value | where d has 'notificationId' and d.notificationId == 'LNPN-R9Z'
- | project resource = tolower(name), status = d.status
+ | project resource = tolower(name), status = d.status, resourceGroup, location, startTimeUtc = d.startTimeUtc, endTimeUtc = d.endTimeUtc, impactType = d.impactType
) on resource
-|project resource, status
+| project resource, status, resourceGroup, location, startTimeUtc, endTimeUtc, impactType
``` For the full reference of the sample queries and how to use them across tools like PowerShell or Azure CLI, visit [Azure Resource Graph sample queries for Azure Service Health](../../service-health/resource-graph-samples.md).
azure-sql Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/maintenance-window.md
Previously updated : 03/07/2022 Last updated : 04/04/2022 # Maintenance window
servicehealthresources
| extend impact = properties.Impact | extend impactedService = parse_json(impact[0]).ImpactedService | where impactedService =~ 'SQL Database'
-| extend eventType = properties.EventType, status = properties.Status, description = properties.Title, trackingId = properties.TrackingId, summary = properties.Summary, priority = properties.Priority, impactStartTime = properties.ImpactStartTime, impactMitigationTime = properties.ImpactMitigationTime
-| where properties.Status == 'Active' and tolong(impactStartTime) > 1 and eventType == 'PlannedMaintenance'
+| extend eventType = properties.EventType, status = properties.Status, description = properties.Title, trackingId = properties.TrackingId, summary = properties.Summary, priority = properties.Priority, impactStartTime = todatetime(tolong(properties.ImpactStartTime)), impactMitigationTime = todatetime(tolong(properties.ImpactMitigationTime))
+| where eventType == 'PlannedMaintenance'
+| order by impactStartTime desc
``` To check for the maintenance events for all managed instances in your subscription, use the following sample query in Azure Resource Graph Explorer:
servicehealthresources
| extend impact = properties.Impact | extend impactedService = parse_json(impact[0]).ImpactedService | where impactedService =~ 'SQL Managed Instance'
-| extend eventType = properties.EventType, status = properties.Status, description = properties.Title, trackingId = properties.TrackingId, summary = properties.Summary, priority = properties.Priority, impactStartTime = properties.ImpactStartTime, impactMitigationTime = properties.ImpactMitigationTime
-| where properties.Status == 'Active' and tolong(impactStartTime) > 1 and eventType == 'PlannedMaintenance'
+| extend eventType = properties.EventType, status = properties.Status, description = properties.Title, trackingId = properties.TrackingId, summary = properties.Summary, priority = properties.Priority, impactStartTime = todatetime(tolong(properties.ImpactStartTime)), impactMitigationTime = todatetime(tolong(properties.ImpactMitigationTime))
+| where eventType == 'PlannedMaintenance'
+| order by impactStartTime desc
``` For the full reference of the sample queries and how to use them across tools like PowerShell or Azure CLI, visit [Azure Resource Graph sample queries for Azure Service Health](../../service-health/resource-graph-samples.md).
azure-video-analyzer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/language-support.md
Previously updated : 02/02/2022 Last updated : 04/07/2022 # Language support in Video Analyzer for Media
This section describes language support in Video Analyzer for Media.
The following insights are translated, otherwise will remain in English: - Transcript
- - OCR
- Keywords - Topics - Labels
azure-video-analyzer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/release-notes.md
Title: Azure Video Analyzer for Media (formerly Video Indexer) release notes | M
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Analyzer for Media (formerly Video Indexer). Previously updated : 04/04/2022 Last updated : 04/07/2022
To stay up-to-date with the most recent Azure Video Analyzer for Media (former Video Indexer) developments, this article provides you with information about:
+* [Important notice](#upcoming-critical-changes) about planned changes
* The latest releases * Known issues * Bug fixes * Deprecated functionality
-## March 2022
+## Upcoming critical changes
+
+> [!Important]
+> This section describes a critical upcoming change for the `Upload-Video` API.
++
+### Upload-Video API
+
+In the past, the `Upload-Video` API tolerated calls that upload a video from a URL while providing an empty multipart form body in the C# code, such as:
+
+```csharp
+var content = new MultipartFormDataContent();
+var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos?{queryParams}", content);
+```
+
+In the coming weeks, our service will fail requests of this type.
+
+In order to upload a video from a URL, change your code to send null in the request body:
+
+```csharp
+var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos?{queryParams}", null);
+```
+
+## March 2022 release updates
### Closed Captioning files now support including speakers' attributes
To enable the dark mode open the settings panel and toggle on the **Dark Mode**
:::image type="content" source="./media/release-notes/dark-mode.png" alt-text="Dark mode setting":::
-## December 2020
+## December 2020
### Video Analyzer for Media deployed in the Switzerland West and Switzerland North
Multiple advancements announced at IBC 2019:
The topic inferencing model now supports deeper granularity of the IPTC taxonomy. Read full details at [Azure Media Services new AI-powered innovation](https://azure.microsoft.com/blog/azure-media-services-new-ai-powered-innovation/).
-## August 2019
+## August 2019 updates
### Video Analyzer for Media deployed in UK South
azure-vmware Create Placement Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/create-placement-policy.md
Title: Create placement policy description: Learn how to create a placement policy in Azure VMware Solution to control the placement of virtual machines (VMs) on hosts within a cluster through the Azure portal. Previously updated : 12/16/2021 Last updated : 04/07/2022 #Customer intent: As an Azure service administrator, I want to control the placement of virtual machines on hosts within a cluster in my private cloud.
The assignment of hosts isn't required or permitted for this policy type.
- **VM-VM Affinity** policies instruct DRS to try to keep the specified VMs together on the same host. It's useful for performance reasons, for example. -- **VM-VM Anti-Affinity** policies instruct DRS to try keeping the specified VMs apart from each other on separate hosts. It's useful in scenarios where a problem with one host doesn't affect multiple VMs within the same policy.
+- **VM-VM Anti-Affinity** policies instruct DRS to try keeping the specified VMs apart from each other on separate hosts. It's useful in availability scenarios where a problem with one host doesn't affect multiple VMs within the same policy.
### VM-Host policies
You can delete a placement policy and its corresponding DRS rule.
Use the vSphere Client to monitor the operation of a placement policy's corresponding DRS rule.
-As a holder of the cloudadmin role, you can view, but not edit, the DRS rules created by a placement policy on the cluster's Configure tab under VM/Host Rules. It lets you view additional information, such as if the DRS rules are in a conflict state.
+As a holder of the CloudAdmin role, you can view, but not edit, the DRS rules created by a placement policy on the cluster's Configure tab under VM/Host Rules. It lets you view additional information, such as if the DRS rules are in a conflict state.
Additionally, you can monitor various DRS rule operations, such as recommendations and faults, from the cluster's Monitor tab.
For most workloads this is not necessary and may cause unintended performance im
1. Navigate to Manage Placement policies and click Restrict VM movement. 1. Select the VM or VMs you want to restrict, then click Select. 1. The VM or VMs you selected appear in the VMs with restricted movement tab.
-In the vSphere client, a VM override will be created to set DRS to partially automated for that VM.
+In the vSphere Client, a VM override will be created to set DRS to partially automated for that VM.
DRS will no longer migrate the VM automatically. Manual vMotion of the VM and automatic initial placement of the VM will continue to function.
Yes, and no. While vSphere DRS implements the current set of policies, we have s
Azure VMware Solution provides a VMware private cloud in Azure. In this managed VMware infrastructure, Microsoft manages the clusters, hosts, datastores, and distributed virtual switches in the private cloud. At the same time, the tenant is responsible for managing the workloads deployed on the private cloud. As a result, the tenant administering the private cloud [does not have the same set of privileges](concepts-identity.md) as available to the VMware administrator in an on-premises deployment.
-Further, the lack of the desired granularity in the vSphere privileges presents some challenges when managing the placement of the workloads on the private cloud. For example, vSphere DRS rules commonly used on-premises to define affinity and anti-affinity rules can't be used as-is in a VMware Cloud environment, as some of those rules can block day-to-day operation the private cloud. Placement Policies provides a way to define those rules using the Azure VMware Solution portal, thereby circumventing the need to use DRS rules. Coupled with a simplified experience, they also ensure that the rules don't impact the day-to-day infrastructure maintenance and operation activities.
+Further, the lack of the desired granularity in the vSphere privileges presents some challenges when managing the placement of the workloads on the private cloud. For example, vSphere DRS rules commonly used on-premises to define affinity and anti-affinity rules can't be used as-is in an Azure VMware Solution environment, as some of those rules can block day-to-day operation of the private cloud. Placement Policies provides a way to define those rules using the Azure VMware Solution portal, thereby circumventing the need to use DRS rules. Coupled with a simplified experience, they also ensure that the rules don't impact the day-to-day infrastructure maintenance and operation activities.
### What is the difference between the VM-Host affinity policy and Restrict VM movement?
The VM-Host **MUST** rules aren't supported because they block maintenance opera
VM-Host **SHOULD** rules are preferential rules, where vSphere DRS tries to accommodate the rules to the extent possible. Occasionally, vSphere DRS may vMotion VMs subjected to the VM-Host **SHOULD** rules to ensure that the workloads get the resources they need. It's a standard vSphere DRS behavior, and the Placement policies feature does not change the underlying vSphere DRS behavior.
-If you create conflicting rules, those conflicts may show up on the vCenter, and the newly defined rules may not take effect. It's a standard vSphere DRS behavior, the logs for which can be observed in the vCenter.
+If you create conflicting rules, those conflicts may show up on the vCenter Server, and the newly defined rules may not take effect. This is standard vSphere DRS behavior; the related logs can be observed in the vCenter Server.
backup Backup Azure Mars Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-mars-troubleshoot.md
Title: Troubleshoot the Azure Backup agent description: In this article, learn how to troubleshoot the installation and registration of the Azure Backup agent. Previously updated : 06/04/2021 Last updated : 04/05/2022+++ # Troubleshoot the Microsoft Azure Recovery Services (MARS) agent
We recommend that you check the following before you start troubleshooting Micro
**Error message**: Invalid vault credentials provided. The file is either corrupted or does not have the latest credentials associated with recovery service. (ID: 34513)
-| Cause | Recommended actions |
+| Causes | Recommended actions |
| | |
-| **Vault credentials aren't valid** <br/> <br/> Vault credential files might be corrupt, might have expired, or they might have a different file extension than *.vaultCredentials*. (For example, they might have been downloaded more than 10 days before the time of registration.)| [Download new credentials](backup-azure-file-folder-backup-faq.yml#where-can-i-download-the-vault-credentials-file-) from the Recovery Services vault on the Azure portal. Then take these steps, as appropriate: <ul><li> If you've already installed and registered MARS, open the Microsoft Azure Backup Agent MMC console. Then select **Register Server** in the **Actions** pane to complete the registration with the new credentials. <br/> <li> If the new installation fails, try reinstalling with the new credentials.</ul> **Note**: If multiple vault credential files have been downloaded, only the latest file is valid for the next 10 days. We recommend that you download a new vault credential file.
-| **Proxy server/firewall is blocking registration** <br/>or <br/>**No internet connectivity** <br/><br/> If your machine has limited internet access, and you don't ensure the firewall, proxy, and network settings allow access to the FQDNS and public IP addresses, the registration will fail.| Take these steps:<br/> <ul><li> Work with your IT team to ensure the system has internet connectivity.<li> If you don't have a proxy server, ensure the proxy option isn't selected when you register the agent. [Check your proxy settings](#verifying-proxy-settings-for-windows).<li> If you do have a firewall/proxy server, work with your networking team to allow access to the following FQDNs and public IP addresses. Access to all of the URLs and IP addresses listed below uses the HTTPS protocol on port 443.<br/> <br> **URLs**<br> `www.msftncsi.com` <br> `www.msftconnecttest.com` <br> \*.microsoft.com <br> \*.windowsazure.com <br> \*.microsoftonline.com <br>\*.windows.net<br><br>**IP addresses**<br> 20.190.128.0/18 <br> 40.126.0.0/18<br> <br/><li>If you are a US Government customer, ensure that you have access to the following URLs:<br><br> `www.msftncsi.com` <br> \*.microsoft.com <br> \*.windowsazure.us <br> \*.microsoftonline.us <br> `*.windows.net` <br> \*.usgovcloudapi.net</li></ul></ul>Try registering again after you complete the preceding troubleshooting steps.<br></br> If your connection is via Azure ExpressRoute, make sure the settings are configured as described in Azure [ExpressRoute support](../backup/backup-support-matrix-mars-agent.md#azure-expressroute-support).
-| **Antivirus software is blocking registration** | If you have antivirus software installed on the server, add necessary exclusion rules to the antivirus scan for these files and folders: <br/><ul> <li> CBengine.exe <li> CSC.exe<li> The scratch folder. Its default location is C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch. <li> The bin folder at C:\Program Files\Microsoft Azure Recovery Services Agent\Bin.
+| **Vault credentials aren't valid** <br/> <br/> Vault credential files might be corrupt, might have expired, or they might have a different file extension than *.vaultCredentials*. (For example, they might have been downloaded more than 10 days before the time of registration.)| [Download new credentials](backup-azure-file-folder-backup-faq.yml#where-can-i-download-the-vault-credentials-file-) from the Recovery Services vault on the Azure portal. Then take these steps, as appropriate: <br><br>- If you've already installed and registered MARS, open the Microsoft Azure Backup Agent MMC console. Then select **Register Server** in the **Actions** pane to complete the registration with the new credentials. <br> - If the new installation fails, try reinstalling with the new credentials. <br><br> **Note**: If multiple vault credential files have been downloaded, only the latest file is valid for the next 10 days. We recommend that you download a new vault credential file. |
+| **Proxy server/firewall is blocking registration** <br/>Or <br/>**No internet connectivity** <br/><br/> If your machine has limited internet access, and you don't ensure the firewall, proxy, and network settings allow access to the FQDNS and public IP addresses, the registration will fail.| Follow these steps:<br/> <br><br>- Work with your IT team to ensure the system has internet connectivity.<br>- If you don't have a proxy server, ensure the proxy option isn't selected when you register the agent. [Check your proxy settings](#verifying-proxy-settings-for-windows).<br>- If you do have a firewall/proxy server, work with your networking team to allow access to the following FQDNs and public IP addresses. Access to all of the URLs and IP addresses listed below uses the HTTPS protocol on port 443.<br/> <br> **URLs**<br> `*.microsoft.com` <br> `*.windowsazure.com` <br> `*.microsoftonline.com` <br> `*.windows.net` <br> `*blob.core.windows.net` <br> `*queue.core.windows.net` <br> `*blob.storage.azure.net`<br><br><br>- If you are a US Government customer, ensure that you have access to the following URLs:<br><br> `www.msftncsi.com` <br> `*.microsoft.com` <br> `*.windowsazure.us` <br> `*.microsoftonline.us` <br> `*.windows.net` <br> `*.usgovcloudapi.net` <br> `*blob.core.windows.net` <br> `*queue.core.windows.net` <br> `*blob.storage.azure.net` <br><br> Try registering again after you complete the preceding troubleshooting steps.<br></br> If your connection is via Azure ExpressRoute, make sure the settings are configured as described in Azure [ExpressRoute support](../backup/backup-support-matrix-mars-agent.md#azure-expressroute-support). |
+| **Antivirus software is blocking registration** | If you have antivirus software installed on the server, add the exclusion rules to the antivirus scan for: <br><br> - Every file and folder under the *scratch* and *bin* folder locations - `<InstallPath>\Scratch\*` and `<InstallPath>\Bin\*`. <br> - cbengine.exe |
### Additional recommendations
Backup operations could fail if there isn't sufficient shadow copy storage space
### Another process or antivirus software blocking access to cache folder
-If you have antivirus software installed on the server, add necessary exclusion rules to the antivirus scan for these files and folders:
--- The scratch folder. Its default location is `C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch`-- The bin folder at `C:\Program Files\Microsoft Azure Recovery Services Agent\Bin`-- CBengine.exe-- CSC.exe ## Common issues
backup Backup Azure Troubleshoot Slow Backup Performance Issue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-troubleshoot-slow-backup-performance-issue.md
We've seen several instances where other processes in the Windows system have ne
The best recommendation in this scenario is to turn off the other backup program to see whether the backup time for the Azure Backup agent changes. Usually, making sure that multiple backup jobs are not running at the same time is sufficient to prevent them from affecting each other.
-For antivirus programs, we recommend that you exclude the following files and locations:
-
-* C:\Program Files\Microsoft Azure Recovery Services Agent\bin\cbengine.exe as a process
-* C:\Program Files\Microsoft Azure Recovery Services Agent\ folders
-* Scratch location (if you're not using the standard location)
<a id="cause3"></a>
backup Backup Reports Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-reports-email.md
Title: Email Azure Backup Reports description: Create automated tasks to receive periodic reports via email Previously updated : 02/14/2022 Last updated : 04/06/2022
Using the **Email Report** feature available in Backup Reports, you can create a
To configure email tasks via Backup Reports, perform the following steps:
-1. Navigate to **Backup Center** > **Backup Reports** and click on the **Email Report** tab.
+1. Go to **Backup Center** > **Backup Reports** and click on the **Email Report** tab.
2. Create a task by specifying the following information: * **Task Details** - The name of the logic app to be created, and the subscription, resource group, and location in which it should be created. Note that the logic app can query data across multiple subscriptions, resource groups, and locations (as selected in the Report Filters section), but is created in the context of a single subscription, resource group and location. * **Data To Export** - The tab which you wish to export. You can either create a single task app per tab, or email all tabs using a single task, by selecting the **All Tabs** option.
To configure email tasks via Backup Reports, perform the following steps:
## Authorize connections to Azure Monitor Logs and Office 365
-The logic app uses the [azuremonitorlogs](/connectors/azuremonitorlogs/) connector for querying the LA workspace(s) and uses the [Office365 Outlook](/connectors/office365connector/) connector for sending emails. You will need to perform a one-time authorization for these two connectors.
+The logic app uses the [azuremonitorlogs](/connectors/azuremonitorlogs/) connector for querying the LA workspace(s) and uses the [Office365 Outlook](/connectors/office365connector/) connector for sending emails. You'll need to perform a one-time authorization for these two connectors.
To perform the authorization, follow the steps below:
-1. Navigate to **Logic Apps** in the Azure portal.
-2. Search for the name of the logic app you have created and navigate to the resource.
+1. Go to **Logic Apps** in the Azure portal.
+2. Search for the name of the logic app you've created and go to the resource.
![Logic Apps](./media/backup-azure-configure-backup-reports/logic-apps.png)
To perform the authorization, follow the steps below:
![API Connections](./media/backup-azure-configure-backup-reports/api-connections.png)
-4. You will see two connections with the format `<location>-azuremonitorlogs` and `<location>-office365` - that is, _eastus-azuremonitorlogs_ and _eastus-office365_.
-5. Navigate to each of these connections and select the **Edit API connection** menu item. In the screen that appears, select **Authorize**, and save the connection once authorization is complete.
+4. You'll see two connections with the format `<location>-azuremonitorlogs` and `<location>-office365` - that is, _eastus-azuremonitorlogs_ and _eastus-office365_.
+5. Go to each of these connections and select the **Edit API connection** menu item. In the screen that appears, select **Authorize**, and save the connection once authorization is complete.
![Authorize connection](./media/backup-azure-configure-backup-reports/authorize-connections.png)
-6. To test whether the logic app works after authorization, you can navigate back to the logic app, open **Overview** and select **Run Trigger** in the top pane, to test whether an email is being generated successfully.
+6. To test whether the logic app works after authorization, you can go back to the logic app, open **Overview** and select **Run Trigger** in the top pane, to test whether an email is being generated successfully.
## Contents of the email * All the charts and graphs shown in the portal are available as inline content in the email. [Learn more](configure-reports.md) about the information shown in Backup Reports. * The grids shown in the portal are available as *.csv attachments in the email. * The data shown in the email uses all the report-level filters selected by the user in the report, at the time of creating the email task.
-* Tab-level filters such as **Backup Instance Name**, **Policy Name** and so on, are not applied. The only exception to this is the **Retention Optimizations** grid in the **Optimize** tab, where the filters for **Daily**, **Weekly**, **Monthly** and **Yearly** RP retention are applied.
+* Tab-level filters such as **Backup Instance Name**, **Policy Name** and so on, aren't applied. The only exception to this is the **Retention Optimizations** grid in the **Optimize** tab, where the filters for **Daily**, **Weekly**, **Monthly** and **Yearly** RP retention are applied.
* The time range and aggregation type (for charts) are based on the user's time range selection in the reports. For example, if the time range selection is last 60 days (translating to weekly aggregation type), and email frequency is daily, the recipient will receive an email every day with charts spanning data taken over the last 60-day period, with data aggregated at a weekly level. ## Troubleshooting issues
If you aren't receiving emails as expected even after successful deployment of t
### Scenario 1: Receiving neither a successful email nor an error email
-* This issue could be occurring because the Outlook API connector is not authorized. To authorize the connection, follow the authorization steps provided above.
+* This issue could be occurring because the Outlook API connector isn't authorized. To authorize the connection, follow the authorization steps provided above.
-* This issue could also be occurring if you have specified an incorrect email recipient while creating the logic app. To verify that the email recipient has been specified correctly, you can navigate to the logic app in the Azure portal, open the Logic App designer and select email step to see whether the correct email IDs are being used.
+* This issue could also be occurring if you've specified an incorrect email recipient while creating the logic app. To verify that the email recipient has been specified correctly, you can go to the logic app in the Azure portal, open the Logic App designer and select email step to see whether the correct email IDs are being used.
### Scenario 2: Receiving an error email that says that the logic app failed to execute to completion

To troubleshoot this issue:
-1. Navigate to the logic app in the Azure portal.
-2. At the bottom of the **Overview** screen, you will see a **Runs History** section. You can open on the latest run and view which steps in the workflow failed. Some possible causes could be:
+1. Go to the logic app in the Azure portal.
+2. At the bottom of the **Overview** screen, you'll see a **Runs History** section. You can open the latest run and view which steps in the workflow failed. Some possible causes could be:
* **Azure Monitor Logs Connector has not been authorized**: To fix this issue, follow the authorization steps as provided above.
* **Error in the LA query**: In case you've customized the logic app with your own queries, an error in any of the LA queries might be causing the logic app to fail. You can select the relevant step and view the error that's causing the query to run incorrectly.
To ensure you're logged in to the right tenant, you can open _portal.azure.com/<
If the issues persist, contact Microsoft support.
+## Guidance for GCC High users
+
+If you're a user in the Azure Government environment using an [Office365 GCC High account](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc-high-and-dod), ensure that the email configuration is set correctly. A different endpoint is used to authorize this connection for GCC High users, and it must be explicitly specified. Use one of the following methods to verify the configuration and set up the logic app to work in GCC High.
+
+**Choose a client:**
+
+# [Azure portal](#tab/portal)
+
+To update the authentication type for the Office 365 connection via the Azure portal, follow these steps:
+
+1. Deploy the logic app task for the required tabs. See the steps in [Getting started](#getting-started).
+
+ Learn about [how to authorize the Azure Monitor Logs connection](#authorize-connections-to-azure-monitor-logs-and-office-365).
+
+1. Once deployed, go to the logic app in the Azure portal and click **Logic app designer** from the menu.
+
+ :::image type="content" source="./media/backup-azure-configure-backup-reports/logic-app-designer-inline.png" alt-text="Screenshot showing to click Logic app designer." lightbox="./media/backup-azure-configure-backup-reports/logic-app-designer-expanded.png":::
+
+1. Locate the places where the Office 365 action is used.
+
+ You'll find two Office 365 actions used, both at the bottom of the flow.
+
+ :::image type="content" source="./media/backup-azure-configure-backup-reports/office-365-change-connection.png" alt-text="Screenshot showing Office 365 change connection.":::
+
+1. Click **Change connection** and click the *information icon*.
+
+ :::image type="content" source="./media/backup-azure-configure-backup-reports/email-information-icon.png" alt-text="Screenshot showing to click information icon.":::
+
+1. A popup opens where you can select the authentication type for GCC High.
+
+Once you select the correct authentication type in all the places where the Office 365 connection is used, the connection should work as expected.
+
+# [Azure Resource Manager (ARM) template](#tab/arm)
+
+You can also directly update the ARM template, which is used for deploying the logic app, to ensure that the GCC High endpoint is used for authorization. Follow these steps:
+
+1. Go to the **Email Report** tab, provide the required inputs, and then click **Submit**.
+
+ :::image type="content" source="./media/backup-azure-configure-backup-reports/view-template-inline.png" alt-text="View email template." lightbox="./media/backup-azure-configure-backup-reports/view-template-expanded.png":::
+
+1. Click **View template**.
+
+    This opens the ARM template JSON, which you can download and edit.
+
+ :::image type="content" source="./media/backup-azure-configure-backup-reports/edit-template-inline.png" alt-text="Screenshot showing to edit template." lightbox="./media/backup-azure-configure-backup-reports/edit-template-expanded.png":::
+
+1. Locate the *resources* block in the JSON file (specifically the section where a resource of `Microsoft.Web/connections` is deployed with the Office 365 parameters).
+
+ To modify the template to support *GCCHigh*, add the subsection *parameterValueSet* to the properties section of this resource.
+
+    The updated block looks like the following example:
+
+ ```json
+ {
+ "type": "Microsoft.Web/connections",
+ "apiVersion": "2018-07-01-preview",
+ "name": "[variables('office365ConnectionName')]",
+ "location": "[parameters('location')]",
+ "tags": "[parameters('resourceTags')]",
+ "properties": {
+ "api": {
+ "id": "[subscriptionResourceId('Microsoft.Web/locations/managedApis', parameters('location'), 'office365')]"
+ },
+ "parameterValueSet": {
+ "name": "oauthGccHigh",
+ "values": {
+ "token": {
+ "value": "https://logic-apis-usgovvirginia.consent.azure-apihub.us/redirect"
+ }
+ }
+ },
+ "displayName": "office365"
+ }
+ }
+ ```
+1. Once you have the edited template, [deploy this template](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md#edit-and-deploy-the-template).
+
+1. Once deployed, [authorize the Azure Monitor Logs and Office 365 connections](#authorize-connections-to-azure-monitor-logs-and-office-365).
+++ ## Next steps [Learn more about Backup Reports](./configure-reports.md)
backup Configure Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/configure-reports.md
Set up one or more Log Analytics workspaces to store your Backup reporting data.
To set up a Log Analytics workspace, see [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md).
-By default, the data in a Log Analytics workspace is retained for 30 days. To see data for a longer time horizon, change the retention period of the Log Analytics workspace. To change the retention period, see [Manage usage and costs with Azure Monitor logs](../azure-monitor/logs/manage-cost-storage.md).
+By default, the data in a Log Analytics workspace is retained for 30 days. To see data for a longer time horizon, change the retention period of the Log Analytics workspace. To change the retention period, see [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md).
### 2. Configure diagnostics settings for your vaults
batch Batch Linux Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-linux-nodes.md
Not all Marketplace images are compatible with the currently available Batch nod
### Node agent SKU
-The [Batch node agent](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md) is a program that runs on each node in the pool and provides the command-and-control interface between the node and the Batch service. There are different implementations of the node agent, known as SKUs, for different operating systems. Essentially, when you create a Virtual Machine Configuration, you first specify the virtual machine image reference, and then you specify the node agent to install on the image. Typically, each node agent SKU is compatible with multiple virtual machine images. To view supported Marketplace VM images with their corresponding node agent SKUs, you can refer to [Account - List Supported Images - REST API (Azure Batch Service) | Microsoft Docs](/rest/api/batchservice/account/list-supported-images).
+The [Batch node agent](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md) is a program that runs on each node in the pool and provides the command-and-control interface between the node and the Batch service. There are different implementations of the node agent, known as SKUs, for different operating systems. Essentially, when you create a Virtual Machine Configuration, you first specify the virtual machine image reference, and then you specify the node agent to install on the image. Typically, each node agent SKU is compatible with multiple virtual machine images. To view the supported node agent SKUs and virtual machine image compatibilities, you can use the [Azure Batch CLI command](/cli/azure/batch/pool#supported-images):
+
+```azurecli-interactive
+az batch pool supported-images list
+```
+
+For more information, you can refer to [Account - List Supported Images - REST API (Azure Batch Service) | Microsoft Docs](/rest/api/batchservice/account/list-supported-images).
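
If you prefer to retrieve the same list from Python (as in the next section), the following sketch shows one way to do it. It's illustrative only and isn't part of this article: it assumes a recent `azure-batch` package (constructor and property names can vary slightly between SDK versions), and the account name, key, and URL are placeholders.

```python
# Illustrative sketch: list Marketplace VM images and the node agent SKU that supports each one.
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials

# Placeholders: replace with your Batch account name, key, and URL.
credentials = SharedKeyCredentials("<batch-account-name>", "<batch-account-key>")
batch_client = BatchServiceClient(
    credentials, batch_url="https://<batch-account-name>.<region>.batch.azure.com")

for image in batch_client.account.list_supported_images():
    ref = image.image_reference
    # Each entry pairs a VM image reference with the node agent SKU ID that supports it.
    print(f"{ref.publisher}/{ref.offer}/{ref.sku} -> {image.node_agent_sku_id} ({image.os_type})")
```
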
## Create a Linux pool: Batch Python
cdn Create Profile Endpoint Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-bicep.md
Get-AzResource -ResourceGroupName exampleRG
## Clean up resources
-### Azure CLI
-
-When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the VM and all of the resources in the resource group.
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
# [CLI](#tab/CLI)
cognitive-services Language Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md
using (var audioInput = AudioConfig.FromWavFileInput(@"en-us_zh-cn.wav"))
{ Console.WriteLine($"CANCELED: ErrorCode={e.ErrorCode}"); Console.WriteLine($"CANCELED: ErrorDetails={e.ErrorDetails}");
- Console.WriteLine($"CANCELED: Did you update the subscription info?");
+ Console.WriteLine($"CANCELED: Did you set the speech resource key and region values?");
} stopRecognition.TrySetResult(0);
public static async Task MultiLingualTranslation()
{ Console.WriteLine($"CANCELED: ErrorCode={e.ErrorCode}"); Console.WriteLine($"CANCELED: ErrorDetails={e.ErrorDetails}");
- Console.WriteLine($"CANCELED: Did you update the subscription info?");
+ Console.WriteLine($"CANCELED: Did you set the speech resource key and region values?");
} stopTranslation.TrySetResult(0);
else if (result->Reason == ResultReason::Canceled)
{ cout << "CANCELED: ErrorCode=" << (int)cancellation->ErrorCode << std::endl; cout << "CANCELED: ErrorDetails=" << cancellation->ErrorDetails << std::endl;
- cout << "CANCELED: Did you update the subscription info?" << std::endl;
+ cout << "CANCELED: Did you set the speech resource key and region values?" << std::endl;
} } ```
void MultiLingualTranslation()
{ cout << "CANCELED: ErrorCode=" << (int)e.ErrorCode << std::endl; cout << "CANCELED: ErrorDetails=" << e.ErrorDetails << std::endl;
- cout << "CANCELED: Did you update the subscription info?" << std::endl;
+ cout << "CANCELED: Did you set the speech resource key and region values?" << std::endl;
recognitionEnd.set_value(); }
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/call-api.md
Previously updated : 02/16/2022 Last updated : 04/06/2022
To delete a deployment, select the deployment you want to delete and select **De
2. Replace `<OPERATION_ID>` with the `jobId` from the previous step.
-3. Submit the `GET` cURL request in your terminal or command prompt. You'll receive a 202 response and JSON similar to the below, if the request was successful.
-
+3. Submit the `GET` cURL request in your terminal or command prompt. You'll receive a 202 response with the API results if the request was successful.
# [Using the REST API](#tab/rest-api)
First you will need to get your resource key and endpoint
### Submit custom NER task
-1. Start constructing a POST request by updating the following URL with your endpoint.
-
- `{YOUR-ENDPOINT}/text/analytics/v3.2-preview.2/analyze`
-
-2. In the header for the request, add your key to the `Ocp-Apim-Subscription-Key` header.
-
-3. In the JSON body of your request, you will specify The documents you're inputting for analysis, and the parameters for the custom entity recognition task. `project-name` is case-sensitive.
-
- > [!tip]
- > See the [quickstart article](../quickstart.md?pivots=rest-api#submit-custom-ner-task) and [reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-2-Preview-2/operations/Analyze) for more information about the JSON syntax.
-
- ```json
- {
- "displayName": "MyJobName",
- "analysisInput": {
- "documents": [
- {
- "id": "doc1",
- "text": "This is a document."
- }
- ]
- },
- "tasks": {
- "customEntityRecognitionTasks": [
- {
- "parameters": {
- "project-name": "MyProject",
- "deployment-name": "MyDeploymentName"
- "stringIndexType": "TextElements_v8"
- }
- }
- ]
- }
- }
- ```
-
-4. You will receive a 202 response indicating success. In the response headers, extract `operation-location`.
-`operation-location` is formatted like this:
-
- `{YOUR-ENDPOINT}/text/analytics/v3.2-preview.2/analyze/jobs/<jobId>`
-
- You will use this endpoint in the next step to get the custom recognition task results.
-
-5. Use the URL from the previous step to create a **GET** request to query the status/results of the custom recognition task. Add your key to the `Ocp-Apim-Subscription-Key` header for the request.
-+
+### Get the task results
+ # [Using the client libraries (Azure SDK)](#tab/client)
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/view-model-evaluation.md
See the [application development lifecycle](../overview.md#application-developme
## View the model's evaluation details
-1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
-
-2. Select **View model details** from the menu on the left side of the screen.
-
-3. In this page you can only view the successfully trained models. You can click on the model name for more details.
-
-4. You can find the **model-level** evaluation metrics under **Overview**, and the **entity-level** evaluation metrics under **Entity performance metrics**. The confusion matrix for the model is located under **Test set confusion matrix**
-
- > [!NOTE]
- > If you don't find all the entities displayed here, it's because they were not in any of the files within the test set.
-
- :::image type="content" source="../media/model-details.png" alt-text="A screenshot of the model performance metrics in Language Studio" lightbox="../media/model-details.png":::
## Next steps
communication-services Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/privacy.md
Call recordings are stored temporarily in the same geography that was selected f
## Azure Monitor and Log Analytics
-Azure Communication Services will feed into Azure Monitor logging data for understanding operational health and utilization of the service. Some of these logs include Communication Service identities and phone numbers as field data. To delete any potentially personal data [use these procedures for Azure Monitor](../../azure-monitor/logs/personal-data-mgmt.md). You may also want to configure [the default retention period for Azure Monitor](../../azure-monitor/logs/manage-cost-storage.md).
+Azure Communication Services will feed into Azure Monitor logging data for understanding operational health and utilization of the service. Some of these logs include Communication Service identities and phone numbers as field data. To delete any potentially personal data [use these procedures for Azure Monitor](../../azure-monitor/logs/personal-data-mgmt.md). You may also want to configure [the default retention period for Azure Monitor](../../azure-monitor/logs/data-retention-archive.md).
## Additional resources
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
For more information on the SMS SDK and service, see the [SMS SDK overview](./sm
|Batch of participants - AddParticipant|200 | |Page size - ListMessages|200 |
+### Operation Limits
+
+| **Operation** | **Bucketed by** | **Limit per 10 seconds** | **Limit per minute** |
+|--|--|--|--|
+|Create chat thread|User|10|-|
+|Delete chat thread|User|10|-|
+|Update chat thread|Chat thread|5|-|
+|Add participants / remove participants|Chat thread|10|30|
+|Get chat thread / List chat threads|User|50|-|
+|Get chat message / List chat messages|User and chat thread|50|-|
+|Get chat message / List chat messages|Chat thread|250|-|
+|Get read receipts|User and chat thread|5|-|
+|Get read receipts|Chat thread|250|-|
+|List chat thread participants|User and chat thread|10|-|
+|List chat thread participants|Chat thread|250|-|
+|Send message / update message / delete message|Chat thread|10|30|
+|Send read receipt|User and chat thread|10|30|
+|Send typing indicator|User and chat thread|5|15|
+|Send typing indicator|Chat thread|10|30|
+ ## Voice and video calling ### Call maximum limitations
confidential-ledger Authenticate Ledger Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/authenticate-ledger-nodes.md
When initializing, code samples get the node certificate by querying Identity Se
Users can validate the authenticity of Azure confidential ledger nodes to confirm they are indeed interfacing with their Ledger’s enclave. You can build trust in Azure confidential ledger nodes in a few ways, which can be stacked on one another to increase the overall level of confidence. As such, Steps 1-2 help build confidence in that Azure confidential ledger enclave as part of the initial TLS handshake and authentication within functional workflows. Beyond that, a persistent client connection is maintained between the user's client and the confidential ledger. - **Validating a confidential ledger node**: This is accomplished by querying the identity service hosted by Microsoft, which provides a network cert and thus helps verify that the Ledger node is presenting a cert endorsed/signed by the network cert for that specific instance. Similar to PKI-based HTTPS, a server’s cert is signed by a well-known Certificate Authority (CA) or intermediate CA. In the case of Azure confidential ledger, the CA cert is returned by an Identity service in the form of a network cert. This is an important confidence building measure for users of confidential ledger. If this node cert isn’t signed by the returned network cert, the client connection should fail (as implemented in the sample code).-- **Validating a confidential ledger enclave**: A confidential ledger runs in an Intel® SGX enclave that’s represented by a Quote, a data blob generated inside that enclave. It can be used by any other entity to verify that the quote has been produced from an application running with Intel® SGX protections. The quote is structured in a way that enables easy verification. It contains claims that help identify various properties of the enclave and the application that it’s running. This is an important confidence building mechanism for users of the confidential ledger. This can be accomplished by calling a functional workflow API to get an enclave quote. The client connection should fail if the quote is invalid. The retrieved quote can then be validated with the open_enclaves Host_Verify tool. More details about this can be found here.
+- **Validating a confidential ledger enclave**: A confidential ledger runs in an Intel® SGX enclave that’s represented by a Quote, a data blob generated inside that enclave. It can be used by any other entity to verify that the quote has been produced from an application running with Intel® SGX protections. The quote is structured in a way that enables easy verification. It contains claims that help identify various properties of the enclave and the application that it’s running. This is an important confidence building mechanism for users of the confidential ledger. This can be accomplished by calling a functional workflow API to get an enclave quote. The client connection should fail if the quote is invalid. The retrieved quote can then be validated with the open_enclaves Host_Verify tool. More details about this can be found [here](https://github.com/openenclave/openenclave/tree/master/samples/host_verify).
## Next steps
connectors Connectors Native Reqres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-reqres.md
This article shows how to use the Request trigger and Response action so that yo
For more information about security, authorization, and encryption for inbound calls to your logic app, such as [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security), previously known as Secure Sockets Layer (SSL), [Azure Active Directory Open Authentication (Azure AD OAuth)](../active-directory/develop/index.yml), exposing your logic app with Azure API Management, or restricting the IP addresses that originate inbound calls, see [Secure access and data - Access for inbound calls to request-based triggers](../logic-apps/logic-apps-securing-a-logic-app.md#secure-inbound-requests). > [!NOTE]
-> For the **Logic App (Standard)** resource type in single-tenant Azure Logic Apps, Azure AD OAuth is currently
-> unavailable for inbound calls to request-based triggers, such as the Request trigger and HTTP Webhook trigger.
+>
+> In a Standard logic app workflow that starts with the Request trigger (but not a webhook trigger), you can
+> use the Azure Functions provision for authenticating inbound calls sent to the endpoint created by that trigger
+> by using a managed identity. This provision is also known as "**Easy Auth**". For more information, review
+> [Trigger workflows in Standard logic apps with Easy Auth](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/trigger-workflows-in-standard-logic-apps-with-easy-auth/ba-p/3207378).
+ ## Prerequisites
container-apps Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/health-probes.md
The `...` placeholders denote omitted code. Refer to [Container Apps Preview ARM
{ "name": "Custom-Header", "value": "liveness probe"
- }],
- "initialDelaySeconds": 7,
- "periodSeconds": 3
- }
+ }]
+ },
+ "initialDelaySeconds": 7,
+ "periodSeconds": 3
}, { "type": "readiness",
The `...` placeholders denote omitted code. Refer to [Container Apps Preview ARM
{ "name": "Custom-Header", "value": "startup probe"
- }],
- "initialDelaySeconds": 3,
- "periodSeconds": 3
- }
+ }]
+ },
+ "initialDelaySeconds": 3,
+ "periodSeconds": 3
}] }] ...
containers:
httpHeaders: - name: Custom-Header value: "liveness probe"
- initialDelaySeconds: 7
- periodSeconds: 3
+ initialDelaySeconds: 7
+ periodSeconds: 3
- type: readiness tcpSocket: port: 8081
containers:
httpHeaders: - name: Custom-Header value: "startup probe"
- initialDelaySeconds: 3
- periodSeconds: 3
+ initialDelaySeconds: 3
+ periodSeconds: 3
... ```
cosmos-db Choose Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/choose-api.md
Last updated 12/08/2021
+adobe-target: true
# Choose an API in Azure Cosmos DB
cosmos-db Supply Chain Traceability Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/supply-chain-traceability-solution.md
Title: Infosys supply chain traceability solution using Azure Cosmos DB Gremllin API
-description: The supply chain traceability graph solution implemented by Infosys uses the Azure Cosmos DB Gremlin API and other Azure services. It provides global supply chain track and trace capability for finished goods.
+ Title: Infosys supply chain traceability solution using Azure Cosmos DB Gremlin API
+description: The Infosys solution for traceability in global supply chains uses the Azure Cosmos DB Gremlin API and other Azure services. It provides track-and-trace capability in graph form for finished goods.
-# Supply chain traceability solution using Azure Cosmos DB Gremlin API
+# Solution for supply chain traceability using the Azure Cosmos DB Gremlin API
[!INCLUDE[appliesto-gremlin-api](../includes/appliesto-gremlin-api.md)]
-This article provides an overview of [traceability graph solutions implemented by Infosys](https://azuremarketplace.microsoft.com/marketplace/apps/infosysltd.infosys-traceability-knowledge-graph?tab=Overview). These solutions use Azure Cosmos DB Gremlin API and other Azure capabilities to provide global supply chain track and trace capability for finished goods.
+This article provides an overview of the [traceability graph solution implemented by Infosys](https://azuremarketplace.microsoft.com/marketplace/apps/infosysltd.infosys-traceability-knowledge-graph?tab=Overview). This solution uses the Azure Cosmos DB Gremlin API and other Azure capabilities to provide a track-and-trace capability for finished goods in global supply chains.
-After reading this article, you will learn:
+In this article, you'll learn:
-* What is traceability in the context of a supply chain?
-* Architecture of a global traceability solution delivered using Azure Capabilities.
-* How the Azure Cosmos DB graph database helps intricate relationships between raw material and finished good in a global supply chain?
-* How does the Azure integration platform services such as API Management, Event Hub help you to integrate diverse supply chain application ecosystems?
-* How can you get help from Infosys to use this solution for your traceability need?
+* What traceability is in the context of a supply chain.
+* The architecture of a global traceability solution delivered through Azure capabilities.
+* How the Azure Cosmos DB graph database helps you track intricate relationships between raw materials and finished goods in a global supply chain.
+* How Azure integration platform services such as Azure API Management and Event Hubs help you integrate diverse application ecosystems for supply chains.
+* How you can get help from Infosys to use this solution for your traceability needs.
## Overview
-In the food supply chain, product traceability is the ability to 'track and trace' them across the supply chain throughout the productΓÇÖs lifecycle. The supply chain includes supply, manufacturing, and distribution. Traceability is vital for food safety, brand, and regulatory exposure. In the past, some organizations failed to track and trace products effectively in their supply chain, resulting in expensive recalls, fines, and consumer health issues. The traceability solutions had to address the needs of data harmonization, data ingestion at different velocity and veracity, and, more importantly, follow the inventory cycle, objectives that weren't possible with traditional platforms.
+In the food supply chain, traceability is the ability to *track and trace* a product across the supply chain throughout the product's lifecycle. The supply chain includes supply, manufacturing, and distribution. Traceability is vital for food safety, brand, and regulatory exposure.
-Infosys's traceability solution, developed with Azure capabilities such as application services, integration services and database services, provides vital capabilities to:
+In the past, some organizations failed to track and trace products effectively in their supply chains. Results included expensive recalls, fines, and consumer health issues.
-* Connect to factories, warehouses/distribution centers.
-* Ingest/process parallel stock movement events.
-* A knowledge graph, which shows connections between raw material, batch, finish goods (FG) pallets, multi-level parent/child relationship of pallets, goods movement.
-* User portal with a search capability range of wildcard search to specific keyword search.
-* Identify impacts of a quality incident such as impacted raw material batch, pallets affected, location of the pallets.
-* Ability to have the history of events captured across multiple markets, including product recall information.
+Traceability solutions had to address the needs of data harmonization and data ingestion at various velocities and veracities. They also had to follow the inventory cycle. These objectives weren't possible with traditional platforms.
## Solution architecture
-Supply chain traceability commonly shares patterns in ingesting pallet movements, handing quality incidents, and tracing/analyzing store data. First, these systems need to ingest bursts of data from factory and warehouse management systems that cross geographies. Next, these systems process and analyze streaming data to derive complex relationships between raw material, production batches, finished good pallets and complex parent/child relationships (co-pack/repack). Then, the system must store information about the intricate relationships between raw material, finished goods, and pallets, all necessary for traceability. A user portal with search capability allows the users to track and trace products in the supply chain network. These services enable the end-to-end traceability solution that supports cloud-native, API-first, and data-driven capabilities.
+Supply chain traceability commonly shares patterns in ingesting pallet movements, handling quality incidents, and tracing/analyzing store data. Infosys developed an end-to-end traceability solution that uses Azure application services, integration services, and database services. The solution provides these capabilities:
-Microsoft Azure offers rich services that can be applied for traceability use cases, including Azure Cosmos DB, Azure Event Hubs, Azure API Management, Azure App Service, Azure SignalR, Azure Synapse Analytics, and Power BI.
+* Receive streaming data from factories, warehouses, and distribution centers across geographies.
+* Ingest and process parallel stock-movement events.
+* View a knowledge graph that analyzes relationships between raw materials, production batches, pallets of finished goods, multilevel parent/child relationships of pallets (copack/repack), and movement of goods.
+* Access a user portal with a search capability that includes wildcards and specific keywords.
+* Identify impacts of a quality incident, such as affected raw materials, batches, pallets, and locations of pallets.
+* Capture the history of events across multiple markets, including product recall information.
-InfosysΓÇÖs traceability solution provides a pre-baked solution that you can use to improve track and trace capability. The following image explains the architecture used for this traceability solution:
+The Infosys traceability solution supports cloud-native, API-first, and data-driven capabilities. The following diagram illustrates the architecture of this solution:
-Different Azure services used in this architecture help with the following tasks:
+The architecture uses the following Azure services to help with specialized tasks:
-* Azure Cosmos DB allows you to scale performance up or down elastically. Gremlin API allows you to create and query complex relationships between raw material, finished goods and warehouses.
-* Azure API Management provides APIs for stock movement events to the 3PLs (thirdpParty Logistic Providers) and Warehouse Management Systems (WMS).
-* Azure Event Hub provides the ability to gather large numbers of concurrent events from WMS and 3PLs for further processing.
-* Azure Function apps processes events and ingest data to Azure Cosmos DB using Gremlin API.
-* Azure Search service allows users to do complex find, filter pallet information.
-* Azure Databricks reads change feed and creates models in Synapse Analytics for self-service reporting for users in Power BI.
-* Azure Web App and App Service plan allow you to deploy the user portal.
-* Azure Storage account stores archived data for long-term regulatory needs.
+* Azure Cosmos DB enables you to scale performance up or down elastically. By using the Gremlin API, you can create and query complex relationships between raw materials, finished goods, and warehouses.
+* Azure API Management provides APIs for stock movement events to third-party logistics (3PL) providers and warehouse management systems (WMSs).
+* Azure Event Hubs provides the ability to gather large numbers of concurrent events from 3PL providers and WMSs for further processing.
+* Azure Functions (through function apps) processes events and ingests data for Azure Cosmos DB by using the Gremlin API.
+* Azure Search enables complex searches and the filtering of pallet information.
+* Azure Databricks reads the change feed and creates models in Azure Synapse Analytics for self-service reporting for users in Power BI.
+* Azure App Service and its Web Apps feature enable the deployment of a user portal.
+* Azure Storage stores archived data for long-term regulatory needs.
-## Graph DB and its data design
+## Graph database and its data design
-The production and distribution of goods require maintaining a complex and dynamic set of relationships. An adaptive data model of our traceability graph allows storing such relationships starting from the receipt of raw material, manufacturing the finished goods in a factory, transferring to different warehouses during supply chain, and finally transferring to the customer warehouse. A high-level visualization of the process looks like the following image:
+The production and distribution of goods require maintaining a complex and dynamic set of relationships. An adaptive data model in the form of a traceability graph allows storing these relationships through all the steps in the supply chain. Here's a high-level visualization of the process:
-The above diagram shows a high level and simplified view of a complex supply chain process. However, getting the vital stock movement information from the factories and warehouses in real time makes it possible to create an elaborate graph that connects all these disparate pieces of information.
+The preceding diagram is a simplified view of a complex process. However, getting stock-movement information from the factories and warehouses in real time makes it possible to create an elaborate graph that connects all these disparate pieces of information:
-1. The traceability process starts when the supplier sends raw materials to the factories, and the initial nodes (vertices) of the graph and relationships (edges) gets created.
+1. The traceability process starts when the supplier sends raw materials to the factories. The solution creates the initial nodes (vertices) of the graph and relationships (edges).
-1. The finished goods (Items) are produced from raw materials and packed into pallets.
+1. The finished goods are produced from raw materials and packed into pallets.
-1. The pallets are then moved to factory warehouses or the market warehouses as per customer demands/orders.
+1. The pallets are moved to factory warehouses or market warehouses according to customer orders. The warehouses might be owned by the company or by 3PL providers.
-1. The warehouse could be of companyΓÇÖs owned or 3PL (third-party Logistic Providers). The pallets are then shipped to various other warehouses as per customer orders. As per the customer demands, child pallets or child-of-child pallets are created to accommodate the ordered quantity. Sometimes, a whole new item is made by mixing multiple items. For example, in a copack scenario that produces a variety pack, sometimes same item gets repacked to smaller or larger quantities to a different pallet as part of a customer order.
+1. The pallets are shipped to various other warehouses according to customer orders. Depending on customers' needs, child pallets or child-of-child pallets are created to accommodate the ordered quantity.
- :::image type="content" source="./media/supply-chain-traceability-solution/pallet-relationship.png" alt-text="Pallet relationship in supply chain traceability solution" lightbox="./media/supply-chain-traceability-solution/pallet-relationship.png" border="true":::
+ Sometimes, a whole new item is made by mixing multiple items. For example, in a copack scenario that produces a variety pack, sometimes the same item is repacked to smaller or larger quantities in a different pallet as part of a customer order.
-1. Pallets then travel through the supply chain network and eventually reach the customer warehouse. During that process, the pallets can be further broken down or combine with other pallets to produce new pallets to fulfill customer orders.
+ :::image type="content" source="./media/supply-chain-traceability-solution/pallet-relationship.png" alt-text="Pallet relationship in the solution for supply chain traceability." lightbox="./media/supply-chain-traceability-solution/pallet-relationship.png" border="true":::
-1. Eventually, the system creates a complex graph that holds vital relationship information for quality incident management, which we will discuss shortly.
+1. Pallets travel through the supply chain network and eventually reach the customer warehouse. During that process, the pallets can be further broken down or combined with other pallets to produce new pallets to fulfill customer orders.
- :::image type="content" source="./media/supply-chain-traceability-solution/supply-chain-object-relationship.png" alt-text="Supply chain object relationship complete architecture" lightbox="./media/supply-chain-traceability-solution/supply-chain-object-relationship.png" border="true":::
+1. Eventually, the system creates a complex graph that holds relationship information for quality incident management.
-1. These intricate relationships are vital in a quality incident where the system can track and trace pallets across the supply chain. Graph and its traversals provide the required information for this. For example, if there is an issue with one raw material, the graph can show the impacted pallets, current location.
+ :::image type="content" source="./media/supply-chain-traceability-solution/supply-chain-object-relationship.png" alt-text="Diagram that shows the complete architecture for the supply chain object relationship." lightbox="./media/supply-chain-traceability-solution/supply-chain-object-relationship.png" border="true":::
+
+ These intricate relationships are vital in a quality incident where the system can track and trace pallets across the supply chain. The graph and its traversals provide the required information for this. For example, if there's an issue with one raw material, the graph can show the affected pallets and the current location.
## Next steps
-* [Infosys traceability graph solution](https://azuremarketplace.microsoft.com/marketplace/apps/infosysltd.infosys-traceability-knowledge-graph?tab=Overview)
-* [Infosys Integrate+ for Azure](https://azuremarketplace.microsoft.com/marketplace/apps/infosysltd.infosys-integrate-for-azure)
+* Learn about [Infosys Integrate+ for Azure](https://azuremarketplace.microsoft.com/marketplace/apps/infosysltd.infosys-integrate-for-azure).
* To visualize graph data, see the [Gremlin API visualization solutions](graph-visualization-partners.md). * To model your graph data, see the [Gremlin API modeling solutions](graph-modeling-tools.md).
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-setup-rbac.md
+
+ Title: Configure role-based access control for your Azure Cosmos DB API for MongoDB database (preview)
+description: Learn how to configure native role-based access control in the API for MongoDB
+++ Last updated : 04/07/2022+++
+# Configure role-based access control for your Azure Cosmos DB API for MongoDB (preview)
+
+This article is about role-based access control for data plane operations in Azure Cosmos DB API for MongoDB, currently in public preview.
+
+If you're using management plane operations, see the [role-based access control](../role-based-access-control.md) article, which applies to management plane operations.
+
+The API for MongoDB exposes a built-in role-based access control (RBAC) system that lets you authorize your data requests with a fine-grained, role-based permission model. Users and roles reside within a database and are managed by using the Azure CLI, Azure PowerShell, or Azure Resource Manager (ARM) for this preview feature.
+
+## Concepts
+
+### Resource
+A resource is a collection or database to which we are applying access control rules.
+
+### Privileges
+Privileges are actions that can be performed on a specific resource. For example, "read access to collection xyz". Privileges are assigned to a specific role.
+
+### Role
+A role has one or more privileges. Roles are assigned to users (zero or more) to enable them to perform the actions defined in those privileges. Roles are stored within a single database.
+
+### Diagnostic log auditing
+An additional column called `userId` has been added to the `MongoRequests` table in the Azure portal Diagnostics feature. This column identifies which user performed which data plane operation. The value in this column is empty when RBAC isn't enabled.
+
+## Available Privileges
+#### Query and Write
+* find
+* insert
+* remove
+* update
+
+#### Change Streams
+* changeStream
+
+#### Database Management
+* createCollection
+* createIndex
+* dropCollection
+* killCursors
+* killAnyCursor
+
+#### Server Administration
+* dropDatabase
+* dropIndex
+* reIndex
+
+#### Diagnostics
+* collStats
+* dbStats
+* listDatabases
+* listCollections
+* listIndexes
+
+## Built-in Roles
+These roles already exist on every database and do not need to be created.
+
+### read
+Has the following privileges: changeStream, collStats, find, killCursors, listIndexes, listCollections
+
+### readwrite
+Has the following privileges: collStats, createCollection, dropCollection, createIndex, dropIndex, find, insert, killCursors, listIndexes, listCollections, remove, update
+
+### dbAdmin
+Has the following privileges: collStats, createCollection, createIndex, dbStats, dropCollection, dropDatabase, dropIndex, listCollections, listIndexes, reIndex
+
+### dbOwner
+Has the following privileges: collStats, createCollection, createIndex, dbStats, dropCollection, dropDatabase, dropIndex, listCollections, listIndexes, reIndex, find, insert, killCursors, remove, update
+
+## Azure CLI Setup
+We recommend using the command prompt (cmd) when using Windows.
+
+1. Make sure you have the latest version of the Azure CLI (not the extension) installed locally. Try the `az upgrade` command.
+2. Check whether a dev version of the extension is already installed: `az extension show -n cosmosdb-preview`. If it shows a local version, remove it by using the following command: `az extension remove -n cosmosdb-preview`. If you're asked to remove it from a Python virtual environment, launch your local CLI extension Python environment and run `azdev extension remove cosmosdb-preview` (no `-n` here).
+3. List the available extensions and make sure the list shows the preview version and that the corresponding "Compatible" flag is true.
+4. Install the latest preview version: `az extension add -n cosmosdb-preview`.
+5. Check that the preview version is installed: `az extension list`.
+6. Connect to your subscription.
+```powershell
+az cloud set -n AzureCloud
+az login
+az account set --subscription <your subscription ID>
+```
+7. Enable the RBAC capability on your existing API for MongoDB database account.
+```powershell
+az cosmosdb update -n <account_name> -g <azure_resource_group> --capabilities EnableMongoRoleBasedAccessControl
+```
+Alternatively, create a new database account with the RBAC capability enabled. Your subscription must be allow-listed to create an account with the `EnableMongoRoleBasedAccessControl` capability.
+```powershell
+az cosmosdb create -n <account_name> -g <azure_resource_group> --kind MongoDB --capabilities EnableMongoRoleBasedAccessControl
+```
+8. Create a database for users to connect to in the Azure portal.
+9. Create an RBAC user with built-in read role.
+```powershell
+az cosmosdb mongodb user definition create --account-name <YOUR_DB_ACCOUNT> --resource-group <YOUR_RG> --body {\"Id\":\"testdb.read\",\"UserName\":\"<YOUR_USERNAME>\",\"Password\":\"<YOUR_PASSWORD>\",\"DatabaseName\":\"<YOUR_DB_NAME>\",\"CustomData\":\"Some_Random_Info\",\"Mechanisms\":\"SCRAM-SHA-256\",\"Roles\":[{\"Role\":\"read\",\"Db\":\"<YOUR_DB_NAME>\"}]}
+```
++
+## Authenticate using pymongo
+Sending the appName parameter is required to authenticate as a user in the preview. Here is an example of how to do so:
+```python
+from pymongo import MongoClient
+client = MongoClient("mongodb://<YOUR_HOSTNAME>:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000", username="<YOUR_USER>", password="<YOUR_PASSWORD>", authSource='<YOUR_DATABASE>', authMechanism='SCRAM-SHA-256', appName="<YOUR appName FROM CONNECTION STRING IN AZURE PORTAL>")
+```
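
As a quick check, a user that's assigned only the built-in `read` role can run queries, but the service rejects writes, which pymongo surfaces as an `OperationFailure`. This is an illustrative sketch; the collection name and the other angle-bracket values are placeholders:

```python
from pymongo import MongoClient
from pymongo.errors import OperationFailure

# Same placeholder connection values as in the example above.
client = MongoClient(
    "mongodb://<YOUR_HOSTNAME>:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000",
    username="<YOUR_USER>", password="<YOUR_PASSWORD>",
    authSource="<YOUR_DATABASE>", authMechanism="SCRAM-SHA-256",
    appName="<YOUR appName FROM CONNECTION STRING IN AZURE PORTAL>")

collection = client["<YOUR_DATABASE>"]["<YOUR_COLLECTION>"]
print(collection.find_one())  # permitted by the built-in 'read' role

try:
    collection.insert_one({"probe": "should fail"})  # 'insert' isn't granted by the 'read' role
except OperationFailure as error:
    print(f"Write rejected: {error}")
```
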
+
+## Azure CLI RBAC Commands
+The RBAC management commands work only with a preview version of the Azure CLI installed. See the Azure CLI setup section above to get started.
+
+#### Create Role Definition
+```powershell
+az cosmosdb mongodb role definition create --account-name <account-name> --resource-group <resource-group-name> --body {\"Id\":\"test.My_Read_Only_Role101\",\"RoleName\":\"My_Read_Only_Role101\",\"Type\":\"CustomRole\",\"DatabaseName\":\"test\",\"Privileges\":[{\"Resource\":{\"Db\":\"test\",\"Collection\":\"test\"},\"Actions\":[\"insert\",\"find\"]}],\"Roles\":[]}
+```
+
+#### Create Role by passing JSON file body
+```powershell
+az cosmosdb mongodb role definition create --account-name <account-name> --resource-group <resource-group-name> --body role.json
+```
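
The `role.json` file passed in the previous command should contain the same JSON document used in the inline `--body` example above. If it helps, here's a small illustrative Python sketch (not part of the official guidance) that writes that body to `role.json`; the database, collection, and role names mirror the earlier example and are placeholders:

```python
import json

# Mirrors the inline --body shown earlier; all names and values are illustrative.
role_definition = {
    "Id": "test.My_Read_Only_Role101",
    "RoleName": "My_Read_Only_Role101",
    "Type": "CustomRole",
    "DatabaseName": "test",
    "Privileges": [
        {
            "Resource": {"Db": "test", "Collection": "test"},
            "Actions": ["insert", "find"]
        }
    ],
    "Roles": []
}

with open("role.json", "w") as f:
    json.dump(role_definition, f, indent=2)
```
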
+
+#### Update Role Definition
+```powershell
+az cosmosdb mongodb role definition update --account-name <account-name> --resource-group <resource-group-name> --body {\"Id\":\"test.My_Read_Only_Role101\",\"RoleName\":\"My_Read_Only_Role101\",\"Type\":\"CustomRole\",\"DatabaseName\":\"test\",\"Privileges\":[{\"Resource\":{\"Db\":\"test\",\"Collection\":\"test\"},\"Actions\":[\"insert\",\"find\"]}],\"Roles\":[]}
+```
+
+#### Update role by passing JSON file body
+```powershell
+az cosmosdb mongodb role definition update --account-name <account-name> --resource-group <resource-group-name> --body role.json
+```
+
+#### List roles
+```powershell
+az cosmosdb mongodb role definition list --account-name <account-name> --resource-group <resource-group-name>
+```
+
+#### Check if role exists
+```powershell
+az cosmosdb mongodb role definition exists --account-name <account-name> --resource-group <resource-group-name> --id test.My_Read_Only_Role
+```
+
+#### Delete role
+```powershell
+az cosmosdb mongodb role definition delete --account-name <account-name> --resource-group <resource-group-name> --id test.My_Read_Only_Role
+```
+
+#### Create user definition
+```powershell
+az cosmosdb mongodb user definition create --account-name <account-name> --resource-group <resource-group-name> --body {\"Id\":\"test.myName\",\"UserName\":\"myName\",\"Password\":\"pass\",\"DatabaseName\":\"test\",\"CustomData\":\"Some_Random_Info\",\"Mechanisms\":\"SCRAM-SHA-256\",\"Roles\":[{\"Role\":\"My_Read_Only_Role101\",\"Db\":\"test\"}]}
+```
+
+#### Create user by passing JSON file body
+```powershell
+az cosmosdb mongodb user definition create --account-name <account-name> --resource-group <resource-group-name> --body user.json
+```
+
+#### Update user definition
+To update the user's password, send the new password in the password field.
+
+```powershell
+az cosmosdb mongodb user definition update --account-name <account-name> --resource-group <resource-group-name> --body {\"Id\":\"test.myName\",\"UserName\":\"myName\",\"Password\":\"pass\",\"DatabaseName\":\"test\",\"CustomData\":\"Some_Random_Info\",\"Mechanisms\":\"SCRAM-SHA-256\",\"Roles\":[{\"Role\":\"My_Read_Only_Role101\",\"Db\":\"test\"}]}
+```
+
+#### Update user by passing JSON file body
+```powershell
+az cosmosdb mongodb user definition update --account-name <account-name> --resource-group <resource-group-name> --body user.json
+```
+
+#### List users
+```powershell
+az cosmosdb mongodb user definition list --account-name <account-name> --resource-group <resource-group-name>
+```
+
+#### Check if user exists
+```powershell
+az cosmosdb mongodb user definition exists --account-name <account-name> --resource-group <resource-group-name> --id test.myName
+```
+
+#### Delete user
+```powershell
+az cosmosdb mongodb user definition delete --account-name <account-name> --resource-group <resource-group-name> --id test.myName
+```
+
+## <a id="disable-local-auth"></a> Enforcing RBAC as the only authentication method
+
+In situations where you want to force clients to connect to Azure Cosmos DB through RBAC exclusively, you have the option to disable the account's primary/secondary keys. When doing so, any incoming request using either a primary/secondary key or a resource token will be actively rejected.
+
+### Using Azure Resource Manager templates
+
+When creating or updating your Azure Cosmos DB account using Azure Resource Manager templates, set the `disableLocalAuth` property to `true`:
+
+```json
+"resources": [
+ {
+ "type": " Microsoft.DocumentDB/databaseAccounts",
+ "properties": {
+ "disableLocalAuth": true,
+ },
+ },
+ ]
+```
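
If you manage the account from code instead of templates, the same property can be set through the Azure management SDK. The following Python sketch is an assumption-labeled illustration, not part of this article: it assumes a recent `azure-mgmt-cosmosdb` package that exposes `disable_local_auth` on the account update parameters, plus `azure-identity` for authentication; the resource names are placeholders.

```python
# Hypothetical sketch: disable local (key-based) authentication on an existing account.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient
from azure.mgmt.cosmosdb.models import DatabaseAccountUpdateParameters

client = CosmosDBManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.database_accounts.begin_update(
    resource_group_name="<resource-group>",
    account_name="<account-name>",
    update_parameters=DatabaseAccountUpdateParameters(disable_local_auth=True),
)
poller.result()  # wait for the update to finish
```
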
+
+## Limitations
+
+- The total number of users and roles you can create must be less than 10,000.
+- The commands `listCollections`, `listDatabases`, and `killCursors` are excluded from RBAC in the preview.
+- Backup/Restore is not supported in the preview.
+- [Azure Synapse Link for Azure Cosmos DB](../synapse-link.md) is not supported in the preview.
+- Users and Roles across databases are not supported in the preview.
+- Users must connect with a tool that supports the `appName` parameter in the preview. The Mongo shell and many GUI tools aren't supported in the preview. MongoDB drivers are supported.
+- A user's password can only be set or reset through the Azure CLI or Azure PowerShell in the preview.
+- Configuring Users and Roles is only supported through Azure CLI / PowerShell.
+
+## Frequently asked questions (FAQs)
+
+### Which Azure Cosmos DB APIs are supported by RBAC?
+
+The API for MongoDB (preview) and the SQL API.
+
+### Is it possible to manage role definitions and role assignments from the Azure portal?
+
+Azure portal support for role management is not available yet.
+
+### Is it possible to disable the usage of the account primary/secondary keys when using RBAC?
+
+Yes, see [Enforcing RBAC as the only authentication method](#disable-local-auth).
+
+### How do I change a user's password?
+
+Update the user definition with the new password.
+
+## Next steps
+
+- Get an overview of [secure access to data in Cosmos DB](../secure-access-to-data.md).
+- Learn more about [RBAC for Azure Cosmos DB management](../role-based-access-control.md).
cosmos-db Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/role-based-access-control.md
Title: Azure role-based access control in Azure Cosmos DB
description: Learn how Azure Cosmos DB provides database protection with Active directory integration (Azure RBAC). Previously updated : 06/17/2021 Last updated : 04/06/2022
[!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)] > [!NOTE]
-> Azure RBAC support in Azure Cosmos DB applies to management plane operations only. This article is about role-based access control for management plane operations in Azure Cosmos DB. If you are using data plane operations, data is secured using primary keys, resource tokens, or the Azure Cosmos DB RBAC. To learn more about role-based access control applied to data plane operations, see [Secure access to data](secure-access-to-data.md) and [Azure Cosmos DB RBAC](how-to-setup-rbac.md) articles.
+> This article is about role-based access control for management plane operations in Azure Cosmos DB. If you are using data plane operations, data is secured using primary keys, resource tokens, or the Azure Cosmos DB RBAC.
+
+To learn more about role-based access control applied to data plane operations in the SQL API, see [Secure access to data](secure-access-to-data.md) and [Azure Cosmos DB RBAC](how-to-setup-rbac.md) articles. For the Cosmos DB API for MongoDB, see [Data Plane RBAC in the API for MongoDB](mongodb/how-to-setup-rbac.md).
Azure Cosmos DB provides built-in Azure role-based access control (Azure RBAC) for common management scenarios in Azure Cosmos DB. An individual who has a profile in Azure Active Directory can assign these Azure roles to users, groups, service principals, or managed identities to grant or deny access to resources and operations on Azure Cosmos DB resources. Role assignments are scoped to control-plane access only, which includes access to Azure Cosmos accounts, databases, containers, and offers (throughput). + ## Built-in roles The following are the built-in roles supported by Azure Cosmos DB:
Update-AzCosmosDBAccount -ResourceGroupName [ResourceGroupName] -Name [CosmosDBA
- [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) - [Azure custom roles](../role-based-access-control/custom-roles.md) - [Azure Cosmos DB resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftdocumentdb)
+- [Configure role-based access control for your Azure Cosmos DB API for MongoDB](mongodb/how-to-setup-rbac.md)
cosmos-db Secure Access To Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/secure-access-to-data.md
Previously updated : 08/30/2021 Last updated : 04/06/2022 - # Secure access to data in Azure Cosmos DB [!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]
Azure Cosmos DB RBAC is the ideal access control method in situations where:
See [Configure role-based access control for your Azure Cosmos DB account](how-to-setup-rbac.md) to learn more about Azure Cosmos DB RBAC.
+For information and sample code to configure RBAC for the Azure Cosmos DB API for MongoDB, see [Configure role-based access control for your Azure Cosmos DB API for MongoDB](mongodb/how-to-setup-rbac.md).
+ ## <a id="resource-tokens"></a> Resource tokens Resource tokens provide access to the application resources within a database. Resource tokens:
As a database service, Azure Cosmos DB enables you to search, select, modify and
- To learn more about Cosmos database security, see [Cosmos DB Database security](database-security.md). - To learn how to construct Azure Cosmos DB authorization tokens, see [Access Control on Azure Cosmos DB Resources](/rest/api/cosmos-db/access-control-on-cosmosdb-resources).-- User management samples with users and permissions, [.NET SDK v3 user management samples](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/UserManagement/UserManagementProgram.cs)
+- For user management samples with users and permissions, see [.NET SDK v3 user management samples](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/UserManagement/UserManagementProgram.cs)
+- For information and sample code to configure RBAC for the Azure Cosmos DB API for MongoDB, see [Configure role-based access control for your Azure Cosmos DB API for MongoDB](mongodb/how-to-setup-rbac.md)
cosmos-db Change Feed Pull Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/change-feed-pull-model.md
ms.devlang: csharp Previously updated : 08/02/2021 Last updated : 04/07/2022
FeedIterator<User> InteratorWithPOCOS = container.GetChangeFeedIterator<User>(Ch
Here's an example for obtaining a `FeedIterator` that returns a `Stream`: ```csharp
-FeedIterator iteratorWithStreams = container.GetChangeFeedStreamIterator<User>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
+FeedIterator iteratorWithStreams = container.GetChangeFeedStreamIterator(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
``` If you don't supply a `FeedRange` to a `FeedIterator`, you can process an entire container's change feed at your own pace. Here's an example, which starts reading all changes starting at the current time: ```csharp
-FeedIterator iteratorForTheEntireContainer = container.GetChangeFeedStreamIterator<User>(ChangeFeedStartFrom.Now(), ChangeFeedMode.Incremental);
+FeedIterator<User> iteratorForTheEntireContainer = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Now(), ChangeFeedMode.Incremental);
while (iteratorForTheEntireContainer.HasMoreResults) {
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-dotnet-v3.md
Previously updated : 03/07/2022 Last updated : 04/07/2022 ms.devlang: csharp
ms.devlang: csharp
> To learn about the Azure Cosmos DB .NET SDK v3, see the [Release notes](sql-api-sdk-dotnet-standard.md), the [.NET GitHub repository](https://github.com/Azure/azure-cosmos-dotnet-v3), .NET SDK v3 [Performance Tips](performance-tips-dotnet-sdk-v3-sql.md), and the [Troubleshooting guide](troubleshoot-dot-net-sdk.md). >
-This article highlights some of the considerations of upgrading your existing .NET application to the newer Azure Cosmos DB .NET SDK v3 for Core (SQL) API. Azure Cosmos DB .NET SDK v3 corresponds to the Microsoft.Azure.Cosmos namespace. You can use the information provided in this doc if you are migrating your application from any of the following Azure Cosmos DB .NET SDKs:
+This article highlights some of the considerations of upgrading your existing .NET application to the newer Azure Cosmos DB .NET SDK v3 for Core (SQL) API. Azure Cosmos DB .NET SDK v3 corresponds to the Microsoft.Azure.Cosmos namespace. You can use the information provided in this doc if you're migrating your application from any of the following Azure Cosmos DB .NET SDKs:
* Azure Cosmos DB .NET Framework SDK v2 for SQL API * Azure Cosmos DB .NET Core SDK v2 for SQL API
Most of the networking, retry logic, and lower levels of the SDK remain largely
## Why migrate to the .NET v3 SDK
-In addition to the numerous usability and performance improvements, new feature investments made in the latest SDK will not be back ported to older versions.
+In addition to the numerous usability and performance improvements, new feature investments made in the latest SDK won't be back ported to older versions.
The v2 SDK is currently in maintenance mode. For the best development experience, we recommend always starting with the latest supported version of SDK. ## Major name changes from v2 SDK to v3 SDK
The following classes have been replaced on the 3.0 SDK:
The Microsoft.Azure.Documents.UriFactory class has been replaced by the fluent design. The fluent design builds URLs internally and allows a single `Container` object to be passed around instead of a `DocumentClient`, `DatabaseName`, and `DocumentCollection`.
-Because the .NET v3 SDK allows users to configure a custom serialization engine, there is no direct replacement for the `Document` type. When using Newtonsoft.Json (default serialization engine), `JObject` can be used to achieve the same functionality. When using a different serialization engine, you can use its base json document type (for example, `JsonDocument` for System.Text.Json). The recommendation is to use a C# type that reflects the schema of your items instead of relying on generic types.
+Because the .NET v3 SDK allows users to configure a custom serialization engine, there's no direct replacement for the `Document` type. When using Newtonsoft.Json (default serialization engine), `JObject` can be used to achieve the same functionality. When using a different serialization engine, you can use its base json document type (for example, `JsonDocument` for System.Text.Json). The recommendation is to use a C# type that reflects the schema of your items instead of relying on generic types.
### Changes to item ID generation
The v3 SDK has built-in support for the Change Feed Processor APIs, allowing you
For more information, see [how to migrate from the change feed processor library to the Azure Cosmos DB .NET v3 SDK](how-to-migrate-from-change-feed-library.md)
+### Change feed queries
+
+Executing change feed queries on the v3 SDK is considered to be using the [change feed pull model](change-feed-pull-model.md). Use the following table to migrate your configuration:
+
+| .NET v2 SDK | .NET v3 SDK |
+|-|-|
+|`ChangeFeedOptions.PartitionKeyRangeId`|`FeedRange` - To achieve parallelism when reading the change feed, [FeedRanges](change-feed-pull-model.md#using-feedrange-for-parallelization) can be used. It's no longer a required parameter; you can [read the Change Feed for an entire container](change-feed-pull-model.md#consuming-an-entire-containers-changes) easily now.|
+|`ChangeFeedOptions.PartitionKey`|`FeedRange.FromPartitionKey` - A FeedRange representing the desired Partition Key can be used to [read the Change Feed for that Partition Key value](change-feed-pull-model.md#consuming-a-partition-keys-changes).|
+|`ChangeFeedOptions.RequestContinuation`|`ChangeFeedStartFrom.Continuation` - The change feed iterator can be stopped and resumed at any time by [saving the continuation and using it when creating a new iterator](change-feed-pull-model.md#saving-continuation-tokens).|
+|`ChangeFeedOptions.StartTime`|`ChangeFeedStartFrom.Time` |
+|`ChangeFeedOptions.StartFromBeginning` |`ChangeFeedStartFrom.Beginning` |
+|`ChangeFeedOptions.MaxItemCount`|`ChangeFeedRequestOptions.PageSizeHint` - Sets the maximum number of items to return in each page of results.|
+|`IDocumentQuery.HasMoreResults` |`response.StatusCode == HttpStatusCode.NotModified` - The change feed is conceptually infinite, so there could always be more results. When a response contains the `HttpStatusCode.NotModified` status code, it means there are no new changes to read at this time. You can use that to stop and [save the continuation](change-feed-pull-model.md#saving-continuation-tokens) or to temporarily sleep or wait and then call `ReadNextAsync` again to test for new changes. |
+|Split handling|It's no longer required for users to handle split exceptions when reading the change feed; splits are handled transparently without the need for user interaction.|
+ ### Using the bulk executor library directly from the V3 SDK The v3 SDK has built-in support for the bulk executor library, allowing you to use the same SDK for building your application and performing bulk operations. Previously, you were required to use a separate bulk executor library.
private static async Task DeleteItemAsync(DocumentClient client)
```
+### Change feed query
+
+# [.NET SDK v3](#tab/dotnet-v3)
+
+```csharp
+private static async Task QueryChangeFeedAsync(Container container)
+{
+ FeedIterator<SalesOrder> iterator = container.GetChangeFeedIterator<SalesOrder>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
+
+ string continuation = null;
+ while (iterator.HasMoreResults)
+ {
+ FeedResponse<SalesOrder> response = await iterator.ReadNextAsync();
+
+ if (response.StatusCode == HttpStatusCode.NotModified)
+ {
+ // No new changes
+ continuation = response.ContinuationToken;
+ break;
+ }
+ else
+ {
+ // Process the documents in response
+ }
+ }
+}
+```
+
+# [.NET SDK v2](#tab/dotnet-v2)
+
+```csharp
+private static async Task QueryChangeFeedAsync(DocumentClient client, string partitionKeyRangeId)
+{
+ ChangeFeedOptions options = new ChangeFeedOptions
+ {
+ PartitionKeyRangeId = partitionKeyRangeId,
+ StartFromBeginning = true,
+ };
+
+ using(var query = client.CreateDocumentChangeFeedQuery(
+ UriFactory.CreateDocumentCollectionUri(DatabaseName, ContainerName), options))
+ {
+ do
+ {
+ var response = await query.ExecuteNextAsync<Document>();
+ if (response.Count > 0)
+ {
+ var docs = new List<Document>();
+ docs.AddRange(response);
+ // Process the documents.
+ // Save response.ResponseContinuation if needed
+ }
+ }
+ while (query.HasMoreResults);
+ }
+}
+```
++ ## Next steps * [Build a Console app](sql-api-get-started.md) to manage Azure Cosmos DB SQL API data using the v3 SDK
cosmos-db Create Table Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/create-table-nodejs.md
In this quickstart, you create an Azure Cosmos DB Table API account, and use Dat
## Prerequisites -- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
- [Node.js 0.10.29+](https://nodejs.org/) . - [Git](https://git-scm.com/downloads).
-## Create a database account
+## Sample application
-> [!IMPORTANT]
-> You need to create a new Table API account to work with the generally available Table API SDKs. Table API accounts created during preview are not supported by the generally available SDKs.
->
+The sample application for this tutorial may be cloned or downloaded from the repository [https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-js](https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-js). Both a starter and completed app are included in the sample repository.
+```bash
+git clone https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-js
+```
-## Add a table
+The sample application uses weather data as an example to demonstrate the capabilities of the Table API. Objects representing weather observations are stored and retrieved using the Table API, including storing objects with additional properties to demonstrate the schemaless capabilities of the Table API.
-## Add sample data
+## 1 - Create an Azure Cosmos DB account
+You first need to create a Cosmos DB Table API account that will contain the table(s) used in your application. You can create the account using the Azure portal, Azure CLI, or Azure PowerShell.
-## Clone the sample application
+### [Azure portal](#tab/azure-portal)
-Now let's clone a Table app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+Log in to the [Azure portal](https://portal.azure.com/) and follow these steps to create a Cosmos DB account.
-1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Create cosmos db account step 1](./includes/create-table-dotnet/create-cosmos-db-acct-1.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find Cosmos DB accounts in Azure." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-1.png"::: |
+| [!INCLUDE [Create cosmos db account step 1](./includes/create-table-dotnet/create-cosmos-db-acct-2.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-2-240px.png" alt-text="A screenshot showing the Create button location on the Cosmos DB accounts page in Azure." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-2.png"::: |
+| [!INCLUDE [Create cosmos db account step 1](./includes/create-table-dotnet/create-cosmos-db-acct-3.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-3-240px.png" alt-text="A screenshot showing the Azure Table option as the correct option to select." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-3.png"::: |
+| [!INCLUDE [Create cosmos db account step 1](./includes/create-table-dotnet/create-cosmos-db-acct-4.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-4-240px.png" alt-text="A screenshot showing how to fill out the fields on the Cosmos DB Account creation page." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-4.png"::: |
- ```bash
- md "C:\git-samples"
- ```
+### [Azure CLI](#tab/azure-cli)
-2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
+Cosmos DB accounts are created using the [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) command. You must include the `--capabilities EnableTable` option to enable table storage within your Cosmos DB account. Because all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Cosmos DB account.
- ```bash
- cd "C:\git-samples"
- ```
+Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Cosmos DB account names must also be unique across Azure.
-3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with the [Azure CLI installed](/cli/azure/install-azure-cli).
- ```bash
- git clone https://github.com/Azure-Samples/storage-table-node-getting-started.git
- ```
+It typically takes several minutes for the Cosmos DB account creation process to complete.
-> [!TIP]
-> For a more detailed walkthrough of similar code, see the [Cosmos DB Table API sample](how-to-use-nodejs.md) article.
+```azurecli
+LOCATION='eastus'
+RESOURCE_GROUP_NAME='rg-msdocs-tables-sdk-demo'
+COSMOS_ACCOUNT_NAME='cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
+COSMOS_TABLE_NAME='WeatherData'
-## Review the code
+az group create \
+ --location $LOCATION \
+ --name $RESOURCE_GROUP_NAME
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [update the connection string](#update-your-connection-string) section of this doc.
+az cosmosdb create \
+ --name $COSMOS_ACCOUNT_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --capabilities EnableTable
+```
-* The following code shows how to create a table within the Azure Storage:
+### [Azure PowerShell](#tab/azure-powershell)
- ```javascript
- storageClient.createTableIfNotExists(tableName, function (error, createResult) {
- if (error) return callback(error);
+Azure Cosmos DB accounts are created using the [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet. You must include the `-ApiKind "Table"` option to enable table storage within your Cosmos DB account. Because all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Azure Cosmos DB account.
- if (createResult.isSuccessful) {
- console.log("1. Create Table operation executed successfully for: ", tableName);
- }
- }
+Azure Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Azure Cosmos DB account names must also be unique across Azure.
+
+Azure PowerShell commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with [Azure PowerShell installed](/powershell/azure/install-az-ps).
+
+It typically takes several minutes for the Cosmos DB account creation process to complete.
+
+```azurepowershell
+$location = 'eastus'
+$resourceGroupName = 'rg-msdocs-tables-sdk-demo'
+$cosmosAccountName = 'cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
+
+# Create a resource group
+New-AzResourceGroup `
+ -Location $location `
+ -Name $resourceGroupName
+
+# Create an Azure Cosmos DB
+New-AzCosmosDBAccount `
+ -Name $cosmosAccountName `
+ -ResourceGroupName $resourceGroupName `
+ -Location $location `
+ -ApiKind "Table"
+```
+++
+## 2 - Create a table
+
+Next, you need to create a table within your Cosmos DB account for your application to use. Unlike a traditional database, you only need to specify the name of the table, not the properties (columns) in the table. As data is loaded into your table, the properties (columns) will be automatically created as needed.
+
+### [Azure portal](#tab/azure-portal)
+
+In the [Azure portal](https://portal.azure.com/), complete the following steps to create a table inside your Cosmos DB account.
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Create cosmos db table step 1](./includes/create-table-dotnet/create-cosmos-table-1.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find your Cosmos DB account." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-1.png"::: |
+| [!INCLUDE [Create cosmos db table step 2](./includes/create-table-dotnet/create-cosmos-table-2.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-2-240px.png" alt-text="A screenshot showing the location of the Add Table button." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-2.png"::: |
+| [!INCLUDE [Create cosmos db table step 3](./includes/create-table-dotnet/create-cosmos-table-3.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-3-240px.png" alt-text="A screenshot showing the New Table dialog box for a Cosmos DB table." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-3.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+Tables in Cosmos DB are created using the [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) command.
+
+```azurecli
+COSMOS_TABLE_NAME='WeatherData'
+
+az cosmosdb table create \
+ --account-name $COSMOS_ACCOUNT_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $COSMOS_TABLE_NAME \
+ --throughput 400
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+Tables in Cosmos DB are created using the [New-AzCosmosDBTable](/powershell/module/az.cosmosdb/new-azcosmosdbtable) cmdlet.
+
+```azurepowershell
+$cosmosTableName = 'WeatherData'
+
+# Create the table for the application to use
+New-AzCosmosDBTable `
+ -Name $cosmosTableName `
+ -AccountName $cosmosAccountName `
+ -ResourceGroupName $resourceGroupName
+```
+++
+## 3 - Get Cosmos DB connection string
+
+To access your table(s) in Cosmos DB, your app needs the table connection string for the Cosmos DB account. The connection string can be retrieved using the Azure portal, Azure CLI, or Azure PowerShell.
+
+### [Azure portal](#tab/azure-portal)
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Get cosmos db table connection string step 1](./includes/create-table-dotnet/get-cosmos-connection-string-1.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-cosmos-db-table-connection-string-1-240px.png" alt-text="A screenshot showing the location of the connection strings link on the Cosmos DB page." lightbox="./media/create-table-dotnet/azure-portal-cosmos-db-table-connection-string-1.png"::: |
+| [!INCLUDE [Get cosmos db table connection string step 2](./includes/create-table-dotnet/get-cosmos-connection-string-2.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-cosmos-db-table-connection-string-2-240px.png" alt-text="A screenshot showing which connection string to select and use in your application." lightbox="./media/create-table-dotnet/azure-portal-cosmos-db-table-connection-string-2.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+To get the primary table storage connection string using Azure CLI, use the [az cosmosdb keys list](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command with the option `--type connection-strings`. This command uses a [JMESPath query](https://jmespath.org/) to display only the primary table connection string.
+
+```azurecli
+# This gets the primary Table connection string
+az cosmosdb keys list \
+ --type connection-strings \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $COSMOS_ACCOUNT_NAME \
+ --query "connectionStrings[?description=='Primary Table Connection String'].connectionString" \
+ --output tsv
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+To get the primary table storage connection string using Azure PowerShell, use the [Get-AzCosmosDBAccountKey](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
+
+```azurepowershell
+# This gets the primary Table connection string
+$(Get-AzCosmosDBAccountKey `
+ -ResourceGroupName $resourceGroupName `
+ -Name $cosmosAccountName `
+ -Type "ConnectionStrings")."Primary Table Connection String"
+```
+
+The connection string for your Cosmos DB account is considered an app secret and must be protected like any other app secret or password.
+++
+## 4 - Install the Azure Data Tables SDK for JS
+
+To access the Cosmos DB Table API from a Node.js application, install the [Azure Data Tables SDK](https://www.npmjs.com/package/@azure/data-tables) package.
+
+```bash
+ npm install @azure/data-tables
+```
+
+## 5 - Configure the Table client in env.js file
+
+Copy your Cosmos DB or Storage account connection string from the Azure portal, and create a `TableClient` object using your copied connection string. Switch to the `1-starter-app` or `2-completed-app` folder. Then, add the values of the corresponding environment variables in the `configure/env.js` file.
- ```
+```js
+const env = {
+ connectionString:"A connection string to an Azure Storage or Cosmos account.",
+ tableName: "WeatherData",
+};
+```
-* The following code shows how to insert data into the table:
+The Azure SDK communicates with Azure using client objects to execute different operations against Azure. The `TableClient` class is used to communicate with the Cosmos DB Table API. An application will typically create a single `serviceClient` object per table and use it throughout the application.
- ```javascript
- var customer = createCustomerEntityDescriptor("Harp", "Walter", "Walter@contoso.com", "425-555-0101");
+```js
+const { TableClient } = require("@azure/data-tables");
+const env = require("../configure/env");
+const serviceClient = TableClient.fromConnectionString(
+ env.connectionString,
+ env.tableName
+);
+```
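+
+A minimal sketch of that pattern (the module layout and paths here are hypothetical): export the client once and require it wherever table operations are needed, rather than creating a new `TableClient` per request.
+
+```js
+// Export the shared client so other modules reuse a single TableClient instance.
+module.exports = { serviceClient };
+
+// In another module that needs table access (hypothetical path):
+// const { serviceClient } = require("./service/tableClient");
+```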
- storageClient.insertOrMergeEntity(tableName, customer, function (error, result, response) {
- if (error) return callback(error);
++
+## 6 - Implement Cosmos DB table operations
+
+All Cosmos DB table operations for the sample app are implemented through the `serviceClient` object in the `tableClient.js` file under the *service* directory.
+
+```js
+const { TableClient } = require("@azure/data-tables");
+const env = require("../configure/env");
+const serviceClient = TableClient.fromConnectionString(
+ env.connectionString,
+ env.tableName
+);
+```
+
+### Get rows from a table
+
+The `serviceClient` object contains a method named `listEntities`, which allows you to select rows from the table. In this example, since no parameters are being passed to the method, all rows will be selected from the table.
+
+```js
+const allRowsEntities = serviceClient.listEntities();
+```
+
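+Because `listEntities` returns a paged async iterable rather than an array, the results are typically consumed with a `for await...of` loop. A minimal sketch (the logging is illustrative only):
+
+```js
+// Run inside an async function.
+for await (const entity of allRowsEntities) {
+  // Each entity exposes partitionKey, rowKey, and any stored properties.
+  console.log(`${entity.partitionKey} ${entity.rowKey}: ${entity.Temperature}`);
+}
+```
+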
+### Filter rows returned from a table
+
+To filter the rows returned from a table, you can pass an OData-style filter string to the `listEntities` method. For example, if you wanted to get all of the weather readings for Chicago between midnight July 1, 2021, and midnight July 2, 2021 (inclusive), you would pass in the following filter string.
+
+```odata
+PartitionKey eq 'Chicago' and RowKey ge '2021-07-01 12:00' and RowKey le '2021-07-02 12:00'
+```
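+
+A filter string like this one can be passed directly to `listEntities` through `queryOptions`, as in the following sketch (the variable name is illustrative):
+
+```js
+const chicagoReadings = serviceClient.listEntities({
+  queryOptions: {
+    filter: "PartitionKey eq 'Chicago' and RowKey ge '2021-07-01 12:00' and RowKey le '2021-07-02 12:00'",
+  },
+});
+```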
+
+You can view all OData filter operators on the OData website in the [Filter System Query Option](https://www.odata.org/documentation/odata-version-2-0/uri-conventions/) section.
- console.log(" insertOrMergeEntity succeeded.");
+When the query conditions from the request are passed to the `filterEntities` function as the `option` parameter, the function creates a filter clause for each non-null property value. It then builds a combined filter string by joining all of the clauses together with an `and` operator. This combined filter string is passed to the `listEntities` method on the `serviceClient` object, and only rows matching the filter string are returned. You can use a similar method in your code to construct suitable filter strings as required by your application. A short usage sketch follows the function below.
+
+```js
+const filterEntities = async function (option) {
+ /*
+ You can query data according to existing fields
+ option provides some conditions to query,eg partitionKey, rowKeyDateTimeStart, rowKeyDateTimeEnd
+ minTemperature, maxTemperature, minPrecipitation, maxPrecipitation
+ */
+ const filterEntitiesArray = [];
+ const filters = [];
+ if (option.partitionKey) {
+ filters.push(`PartitionKey eq '${option.partitionKey}'`);
+ }
+ if (option.rowKeyDateTimeStart) {
+ filters.push(`RowKey ge '${option.rowKeyDateTimeStart}'`);
+ }
+ if (option.rowKeyDateTimeEnd) {
+ filters.push(`RowKey le '${option.rowKeyDateTimeEnd}'`);
+ }
+ if (option.minTemperature !== null) {
+ filters.push(`Temperature ge ${option.minTemperature}`);
+ }
+ if (option.maxTemperature !== null) {
+ filters.push(`Temperature le ${option.maxTemperature}`);
}
- ```
-
-* The following code shows how to query data from the table:
-
- ```javascript
- console.log("6. Retrieving entities with surname of Smith and first names > 1 and <= 75");
-
- var storageTableQuery = storage.TableQuery;
- var segmentSize = 10;
-
- // Demonstrate a partition range query whereby we are searching within a partition for a set of entities that are within a specific range.
- var tableQuery = new storageTableQuery()
- .top(segmentSize)
- .where('PartitionKey eq ?', lastName)
- .and('RowKey gt ?', "0001").and('RowKey le ?', "0075");
-
- runPageQuery(tableQuery, null, function (error, result) {
-
- if (error) return callback(error);
-
- ```
-
-* The following code shows how to delete data from the table:
-
- ```javascript
- storageClient.deleteEntity(tableName, customer, function entitiesQueried(error, result) {
- if (error) return callback(error);
-
- console.log(" deleteEntity succeeded.");
+ if (option.minPrecipitation !== null) {
+ filters.push(`Precipitation ge ${option.minPrecipitation}`);
}
- ```
-
-## Update your connection string
+ if (option.maxPrecipitation !== null) {
+ filters.push(`Precipitation le ${option.maxPrecipitation}`);
+ }
+ const res = serviceClient.listEntities({
+ queryOptions: {
+ filter: filters.join(" and "),
+ },
+ });
+ for await (const entity of res) {
+ filterEntitiesArray.push(entity);
+ }
+
+ return filterEntitiesArray;
+};
+```
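+
+As a usage sketch (the argument values are illustrative), the Chicago readings from the earlier filter example can be retrieved by passing an options object; numeric filters that aren't needed are set to `null` so no clause is generated for them:
+
+```js
+// Run inside an async function.
+const chicagoJulyReadings = await filterEntities({
+  partitionKey: "Chicago",
+  rowKeyDateTimeStart: "2021-07-01 12:00",
+  rowKeyDateTimeEnd: "2021-07-02 12:00",
+  minTemperature: null,
+  maxTemperature: null,
+  minPrecipitation: null,
+  maxPrecipitation: null,
+});
+console.log(`Found ${chicagoJulyReadings.length} readings for Chicago`);
+```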
+
+### Insert data using a TableEntity object
+
+The simplest way to add data to a table is by using a `TableEntity` object. In this example, data is mapped from an input model object to a `TableEntity` object. The properties on the input object representing the weather station name and observation date/time are mapped to the `PartitionKey` and `RowKey` properties respectively, which together form a unique key for the row in the table. The remaining properties on the input model object are mapped to properties on the `TableEntity` object. Finally, the `createEntity` method on the `serviceClient` object is used to insert data into the table; a usage sketch follows the code below.
-Now go back to the Azure portal to get your connection string information and copy it into the app. This enables your app to communicate with your hosted database.
+Modify the `insertEntity` function in the example application to contain the following code.
-1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Connection String**.
+```js
+const insertEntity = async function (entity) {
- :::image type="content" source="./media/create-table-nodejs/connection-string.png" alt-text="View and copy the required connection string information from the in the Connection String pane":::
+ await serviceClient.createEntity(entity);
-2. Copy the PRIMARY CONNECTION STRING using the copy button on the right side.
+};
+```
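+
+For example, a caller might build the entity from an input model before calling `insertEntity`. This sketch assumes hypothetical input field names and uses the JavaScript SDK's camelCase `partitionKey`/`rowKey` entity properties:
+
+```js
+// Run inside an async function. The observation object is a hypothetical input model.
+const observation = {
+  stationName: "Chicago",
+  observationDate: "2021-07-01 12:00",
+  temperature: 87,
+  precipitation: 0.25,
+};
+
+// Map the station name and observation date/time to the row's unique key.
+await insertEntity({
+  partitionKey: observation.stationName,
+  rowKey: observation.observationDate,
+  Temperature: observation.temperature,
+  Precipitation: observation.precipitation,
+});
+```
+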
+### Upsert data using a TableEntity object
-3. Open the *app.config* file, and paste the value into the connectionString on line three.
+If you try to insert a row into a table with a partition key/row key combination that already exists in that table, you'll receive an error. For this reason, it's often preferable to use the `upsertEntity` method instead of the `createEntity` method when adding rows to a table. If the given partition key/row key combination already exists in the table, the `upsertEntity` method updates the existing row. Otherwise, the row is added to the table.
- > [!IMPORTANT]
- > If your Endpoint uses documents.azure.com, that means you have a preview account, and you need to create a [new Table API account](#create-a-database-account) to work with the generally available Table API SDK.
- >
+```js
+const upsertEntity = async function (entity) {
-3. Save the *app.config* file.
+ await serviceClient.upsertEntity(entity, "Merge");
-You've now updated your app with all the info it needs to communicate with Azure Cosmos DB.
+};
+```
+### Insert or upsert data with variable properties
-## Run the app
+One of the advantages of using the Cosmos DB Table API is that if an object being loaded to a table contains any new properties, those properties are automatically added to the table and the values are stored in Cosmos DB. There is no need to run DDL statements like ALTER TABLE to add columns as in a traditional database.
-1. In the git terminal window, `cd` to the storage-table-java-getting-started folder.
+This model gives your application flexibility when dealing with data sources that may add or modify what data needs to be captured over time or when different inputs provide different data to your application. In the sample application, we can simulate a weather station that sends not just the base weather data but also some additional values. When an object with these new properties is stored in the table for the first time, the corresponding properties (columns) will be automatically added to the table.
- ```
- cd "C:\git-samples\storage-table-node-getting-started"
- ```
+To insert or upsert such an object using the Table API, map the properties of the expandable object into a `TableEntity` object and use the `createEntity` or `upsertEntity` methods on the `serviceClient` object as appropriate.
-2. Run the following command to install the [azure], [node-uuid], [nconf] and [async] modules locally as well as to save an entry for them to the *package.json* file.
+In the sample application, the `upsertEntity` function can also be used to insert or upsert data with variable properties, as shown in the sketch after the following code.
- ```
- npm install azure-storage node-uuid async nconf --save
- ```
+```js
+const insertEntity = async function (entity) {
+ await serviceClient.createEntity(entity);
+};
-2. In the git terminal window, run the following commands to run the Node.js application.
+const upsertEntity = async function (entity) {
+ await serviceClient.upsertEntity(entity, "Merge");
+};
+```
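+
+As a sketch (the extra field names are hypothetical), an object that carries additional properties can be passed to `upsertEntity` as-is; the new properties (columns) are created in the table the first time such an object is stored:
+
+```js
+// Run inside an async function.
+await upsertEntity({
+  partitionKey: "Chicago",
+  rowKey: "2021-07-01 12:00",
+  Temperature: 87,
+  Precipitation: 0.25,
+  // Extra values sent by this particular station; no schema change is
+  // required before storing them.
+  WindSpeed: 12,
+  SkyCondition: "Partly cloudy",
+});
+```
+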
+### Update an entity
- ```
- node ./tableSample.js
- ```
+Entities can be updated by calling the `updateEntity` method on the `serviceClient` object.
- The console window displays the table data being added to the new table database in Azure Cosmos DB.
+In the sample app, the modified entity object is passed to the `updateEntity` function, which calls the `updateEntity` method on the `serviceClient` object to save the updates to the database.
- You can now go back to Data Explorer and see, query, modify, and work with this new data.
+```js
+const updateEntity = async function (entity) {
+ await serviceClient.updateEntity(entity, "Replace");
+};
+```
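+
+A typical read-modify-write flow looks like the following sketch, which assumes the client's `getEntity` method from the `@azure/data-tables` package and uses hypothetical key values:
+
+```js
+// Run inside an async function: read the existing row, change a property, then replace it.
+const existing = await serviceClient.getEntity("Chicago", "2021-07-01 12:00");
+existing.Temperature = 90;
+await updateEntity(existing);
+```
+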
-## Review SLAs in the Azure portal
+## 7 - Run the code
+Run the sample application to interact with the Cosmos DB Table API. The first time you run the application, there will be no data because the table is empty. Use any of the buttons at the top of the application to add data to the table.
++
+Selecting the **Insert using Table Entity** button opens a dialog allowing you to insert or upsert a new row using a `TableEntity` object.
++
+Selecting the **Insert using Expandable Data** button brings up a dialog that enables you to insert an object with custom properties, demonstrating how the Cosmos DB Table API automatically adds properties (columns) to the table when needed. Use the *Add Custom Field* button to add one or more new properties and demonstrate this capability.
++
+Use the **Insert Sample Data** button to load some sample data into your Cosmos DB Table.
++
+Select the **Filter Results** item in the top menu to be taken to the Filter Results page. On this page, fill out the filter criteria to demonstrate how a filter clause can be built and passed to the Cosmos DB Table API.
+ ## Clean up resources
+When you are finished with the sample application, you should remove all Azure resources related to this article from your Azure account. You can do this by deleting the resource group.
+
+### [Azure portal](#tab/azure-portal)
+
+A resource group can be deleted using the [Azure portal](https://portal.azure.com/) by doing the following.
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Delete resource group step 1](./includes/create-table-dotnet/remove-resource-group-1.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-remove-resource-group-1-240px.png" alt-text="A screenshot showing how to search for a resource group." lightbox="./media/create-table-dotnet/azure-portal-remove-resource-group-1.png"::: |
+| [!INCLUDE [Delete resource group step 2](./includes/create-table-dotnet/remove-resource-group-2.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-remove-resource-group-2-240px.png" alt-text="A screenshot showing the location of the Delete resource group button." lightbox="./media/create-table-dotnet/azure-portal-remove-resource-group-2.png"::: |
+| [!INCLUDE [Delete resource group step 3](./includes/create-table-dotnet/remove-resource-group-3.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-remove-resource-group-3-240px.png" alt-text="A screenshot showing the confirmation dialog for deleting a resource group." lightbox="./media/create-table-dotnet/azure-portal-remove-resource-group-3.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+To delete a resource group using the Azure CLI, use the [az group delete](/cli/azure/group#az-group-delete) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
+
+```azurecli
+az group delete --name $RESOURCE_GROUP_NAME
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+To delete a resource group using Azure PowerShell, use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
+
+```azurepowershell
+Remove-AzResourceGroup -Name $resourceGroupName
+```
++ ## Next steps
cost-management-billing Buy Vm Software Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/buy-vm-software-reservation.md
+
+ Title: Prepay for Virtual machine software reservations
+description: Learn how to prepay for Azure virtual machine software reservations to save money.
+++++ Last updated : 03/17/2022+++
+# Prepay for Virtual machine software reservations (Azure Marketplace)
+
+When you prepay for your virtual machine software usage (available in the Azure Marketplace), you can save money over your pay-as-you-go costs. The discount is automatically applied to a deployed plan that matches the reservation, not on the virtual machine usage. You can buy reservations for virtual machines separately for more savings.
+
+You can buy a virtual machine software reservation in the Azure portal. To buy a reservation:
+
+- You must have the owner role for at least one Enterprise or individual subscription with pay-as-you-go pricing.
+- For Enterprise subscriptions, the **Add Reserved Instances** option must be enabled in the [EA portal](https://ea.azure.com/). If the setting is disabled, you must be an EA Admin for the subscription.
+- For the Cloud Solution Provider (CSP) program, the admin agents or sales agents can buy the software plans.
+
+## Buy a virtual machine software reservation
+
+1. Select your desired plan from Azure Marketplace that has reservation pricing.
+2. Select **Purchase** and then select the Virtual machine software reservation that you want to buy.
+Any deployed virtual machine software that matches the attributes of the reservation you buy gets the discount. The actual number of deployments that get the discount depends on the scope and quantity selected.
+3. Select a subscription. It's used to pay for the plan.
+The subscription payment method is charged the upfront costs for the reservation. To buy a reservation, you must have owner role or reservation purchaser role on an Azure subscription that's of type Enterprise (MS-AZR-0017P or MS-AZR-0148P) or Pay-As-You-Go (MS-AZR-0003P or MS-AZR-0023P) or Microsoft Customer Agreement.
+ - For an enterprise subscription, these reservation purchase charges are not deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance. The charges are billed to the subscription's credit card or invoice payment method.
+ - For an individual subscription with pay-as-you-go pricing, the charges are billed to the subscription's credit card or invoice payment method.
+4. Select a scope. The scope can cover one subscription or multiple subscriptions (using a shared scope).
+ - Single subscription - The plan discount is applied to matching usage in the subscription.
+ - Shared - The plan discount is applied to matching instances in any subscription in your billing context. For enterprise customers, the billing context is the enrollment and includes all subscriptions in the enrollment. For customers with individual plans with pay-as-you-go pricing, the billing context includes all pay-as-you-go subscriptions created by the account administrator.
+ - Management group - Applies the reservation discount to the matching resource in the list of subscriptions that are a part of both the management group and billing scope.
+ - Single resource group - Applies the reservation discount to the matching resources in the selected resource group only.
+5. Select a product to choose the VM size and the image type. The discount applies to matching resources, with instance size flexibility turned on.
+6. Select a one-year or three-year term.
+7. Choose a quantity, which is the number of prepaid VM instances that can get the billing discount.
+8. Add the product to the cart, review and purchase.
+
+The reservation discount is automatically applied to the software meter that you pre-pay for. VM compute charges aren't covered by the software reservation. You can purchase the VM reservations separately.
+
+## Discount applies to different VM sizes
+
+Like Reserved VM Instances, Virtual machine software reservation purchases offer instance size flexibility. So, your discount applies even when you deploy a VM with a different vCPU count. The discount applies to different VM sizes within the Virtual machine software reservation.
+
+## Self-service cancellation and exchanges
+
+You can't exchange a Virtual machine software reservation that you bought yourself. You can, however, cancel the reservation within 72 hours of purchase. The [cancellation limit](exchange-and-refund-azure-reservations.md#cancel-exchange-and-refund-policies) applies.
+
+Check your usage before purchasing to make sure you buy the right software reservation.
+
+## Next steps
+
+- To learn more about Azure Reservations, see the following articles:
+ - [What are Azure Reservations?](save-compute-costs-reservations.md)
+ - [Understand how the Azure virtual machine software reservation discount is applied](understand-vm-software-reservation-discount.md)
+ - [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md)
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
You can purchase reservations from Azure portal, APIs, PowerShell, CLI. Read the
- [Synapse Analytics - data warehouse](prepay-sql-data-warehouse-charges.md) - [Synapse Analytics - Pre-purchase](synapse-analytics-pre-purchase-plan.md) - [Virtual machines](../../virtual-machines/prepay-reserved-vm-instances.md?toc=/azure/cost-management-billing/reservations/toc.json)
+- [Virtual machine software](buy-vm-software-reservation.md)
## Buy reservations with monthly payments
cost-management-billing Understand Vm Software Reservation Discount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-vm-software-reservation-discount.md
+
+ Title: Understand how the Azure virtual machine software reservation discount is applied
+description: Learn how Azure virtual machine software reservation discount is applied before you buy.
+++++ Last updated : 03/09/2022+++
+# Understand how the virtual machine software reservation (Azure Marketplace) discount is applied
+
+After you buy a virtual machine software reservation, the discount is automatically applied to the deployed plan that matches the reservation. A software reservation only covers the cost of running the software plan you chose on an Azure VM.
+
+To buy the right virtual machine software reservation, you need to understand the software plan you want to run and the number of vCPUs on those VMs.
+
+## Discount applies to different VM sizes
+
+Like Reserved VM Instances, virtual machine software reservation purchases offer instance size flexibility. So, your discount applies even when you deploy a VM with a different vCPU count. The discount applies to different VM sizes within the virtual machine software reservation.
+
+For example, if you buy a virtual machine software reservation for a VM with one vCPU, the ratio for that reservation is 1. If you use a two-vCPU machine with a 1:2 ratio, the reservation covers 50% of the cost. The ratio is based on how the software plan was configured by the publisher.
+
+## Prepay for virtual machine software reservations
+
+When you prepay for your virtual machine software usage (available in the Azure Marketplace), you can save money over your pay-as-you-go costs. The discount is automatically applied to a deployed plan that matches the reservation, not on the virtual machine usage. You can buy reservations for virtual machines separately for more savings.
+
+You can buy a virtual machine software reservation in the Azure portal. To buy a reservation:
+
+- You must have the owner role for at least one Enterprise or individual subscription with pay-as-you-go pricing.
+- For Enterprise subscriptions, the **Add Reserved Instances** option must be enabled in the [EA portal](https://ea.azure.com/). If the setting is disabled, you must be an EA Admin for the subscription.
+- For the Cloud Solution Provider (CSP) program, the admin agents or sales agents can buy the software plans.
+
+## Need help? Contact us
+
+If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+
+## Next steps
+
+To learn more about Azure Reservations, see the following articles:
+
+- [What are reservations for Azure?](save-compute-costs-reservations.md)
+- [Prepay for Azure virtual machine software reservations](buy-vm-software-reservation.md)
+- [Prepay for Virtual Machines with Azure Reserved VM Instances](../../virtual-machines/prepay-reserved-vm-instances.md)
+- [Prepay for SQL Database compute resources with Azure SQL Database reserved capacity](../../azure-sql/database/reserved-capacity-overview.md)
+- [Manage reservations for Azure](manage-reserved-vm-instance.md)
+- [Understand reservation usage for your Pay-As-You-Go subscription](understand-reserved-instance-usage.md)
+- [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md)
+- [Understand reservation usage for CSP subscriptions](/partner-center/azure-reservations)
+- [Windows software costs not included with reservations](reserved-instance-windows-software-costs.md)
data-factory Concepts Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-integration-runtime.md
Data Factory offers three types of Integration Runtime (IR), and you should choo
The following table describes the capabilities and network support for each of the integration runtime types:
-IR type | Public network | Private network
-- | -- |
-Azure | Data Flow<br/>Data movement<br/>Activity dispatch | Data Flow<br/>Data movement<br/>Activity dispatch
-Self-hosted | Data movement<br/>Activity dispatch | Data movement<br/>Activity dispatch
-Azure-SSIS | SSIS package execution | SSIS package execution
+IR type | Public Network Support | Private Link Support |
+- | -- | |
+Azure | Data Flow<br/>Data movement<br/>Activity dispatch | Data Flow<br/>Data movement<br/>Activity dispatch |
+Self-hosted | Data movement<br/>Activity dispatch | Data movement<br/>Activity dispatch |
+Azure-SSIS | SSIS package execution | SSIS package execution |
+
+> [!NOTE]
+> Outbound controls vary by service for Azure IR. In Synapse, workspaces have options to limit outbound traffic from the [managed virtual network](../synapse-analytics/security/synapse-workspace-managed-vnet.md) when utilizing Azure IR. In Data Factory, all ports are opened for [outbound communications](managed-virtual-network-private-endpoint.md#outbound-communications-through-public-endpoint-from-adf-managed-virtual-network) when utilizing Azure IR. Azure-SSIS IR can be integrated with your vNET to provide [outbound communications](azure-ssis-integration-runtime-standard-virtual-network-injection.md) controls.
## Azure integration runtime
An Azure integration runtime can:
### Azure IR network environment
-Azure Integration Runtime supports connecting to data stores and computes services with public accessible endpoints. Enabling Managed Virtual Network, Azure Integration Runtime supports connecting to data stores using private link service in private network environment.
+Azure Integration Runtime supports connecting to data stores and compute services with publicly accessible endpoints. When you enable a Managed Virtual Network, Azure Integration Runtime supports connecting to data stores using the private link service in a private network environment. In Synapse, workspaces have options to limit outbound traffic from the IR [managed virtual network](../synapse-analytics/security/synapse-workspace-managed-vnet.md). In Data Factory, all ports are opened for [outbound communications](managed-virtual-network-private-endpoint.md#outbound-communications-through-public-endpoint-from-adf-managed-virtual-network). The Azure-SSIS IR can be integrated with your vNET to provide [outbound communications](azure-ssis-integration-runtime-standard-virtual-network-injection.md) controls.
### Azure IR compute resource and scaling Azure integration runtime provides a fully managed, serverless compute in Azure. You don't have to worry about infrastructure provision, software installation, patching, or capacity scaling. In addition, you only pay for the duration of the actual utilization.