Updates from: 04/08/2022 01:11:07
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Age Gating https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/age-gating.md
Previously updated : 08/24/2021 Last updated : 04/07/2022 zone_pivot_groups: b2c-policy-type
When you sign-in as a minor, you should see the following error message: *Unfort
## Enable age gating in your custom policy
-1. Get the example of an age gating policy on [GitHub](https://github.com/azure-ad-b2c/samples/tree/master/policies).
+1. Get the example of an age gating policy on [GitHub](https://github.com/azure-ad-b2c/samples/tree/master/policies/age-gating).
1. In each file, replace the string `yourtenant` with the name of your Azure AD B2C tenant. For example, if the name of your B2C tenant is *contosob2c*, all instances of `yourtenant.onmicrosoft.com` become `contosob2c.onmicrosoft.com`.
1. Upload the policy files.
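If you prefer to script the string replacement across the downloaded policy files, a minimal sketch such as the following can help (Python; it assumes the XML files sit in a local `policies` folder and that your tenant is named *contosob2c*):

```python
from pathlib import Path

TENANT = "contosob2c"          # assumption: your Azure AD B2C tenant name
POLICY_DIR = Path("policies")  # assumption: folder holding the downloaded age-gating policy XML files

for policy_file in POLICY_DIR.glob("*.xml"):
    text = policy_file.read_text(encoding="utf-8")
    # Replace every occurrence of the sample tenant with your own tenant name.
    updated = text.replace("yourtenant.onmicrosoft.com", f"{TENANT}.onmicrosoft.com")
    policy_file.write_text(updated, encoding="utf-8")
    print(f"Updated {policy_file.name}")
```

After the replacement, upload the files in the order the sample's readme describes.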
active-directory-b2c Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-monitor.md
workspace("AD-B2C-TENANT1").AuditLogs
## Change the data retention period
-Azure Monitor Logs are designed to scale and support collecting, indexing, and storing massive amounts of data per day from any source in your enterprise or deployed in Azure. By default, logs are retained for 30 days, but retention duration can be increased to up to two years. Learn how to [manage usage and costs with Azure Monitor Logs](../azure-monitor/logs/manage-cost-storage.md). After you select the pricing tier, you can [Change the data retention period](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period).
+Azure Monitor Logs are designed to scale and support collecting, indexing, and storing massive amounts of data per day from any source in your enterprise or deployed in Azure. By default, logs are retained for 30 days, but retention duration can be increased to up to two years. Learn how to [manage usage and costs with Azure Monitor Logs](../azure-monitor/logs/cost-logs.md). After you select the pricing tier, you can [Change the data retention period](../azure-monitor/logs/data-retention-archive.md).
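If you script the retention change rather than using the portal, the setting lives on the Log Analytics workspace resource itself. A hedged sketch follows (Python; the subscription, resource group, workspace name, token source, and `api-version` are placeholders you'd adjust):

```python
import os
import requests

# Assumption: an Azure Resource Manager access token is available in an environment variable.
token = os.environ["ARM_ACCESS_TOKEN"]
url = (
    "https://management.azure.com/subscriptions/<subscription-id>"
    "/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights"
    "/workspaces/<workspace-name>?api-version=2021-12-01-preview"  # check the current api-version
)

# retentionInDays controls workspace-level retention (30 days up to two years).
resp = requests.patch(
    url,
    headers={"Authorization": f"Bearer {token}"},
    json={"properties": {"retentionInDays": 90}},
)
resp.raise_for_status()
print(resp.json()["properties"]["retentionInDays"])
```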
## Next steps
active-directory Troubleshoot Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-sspr-writeback.md
A best practice when you troubleshoot problems with password writeback is to ins
| Code | Name or message | Description | | | | | | 6329 | BAIL: MMS(4924) 0x80230619: "A restriction prevents the password from being changed to the current one specified." | This event occurs when the password writeback service attempts to set a password on your local directory that doesn't meet the password age, history, complexity, or filtering requirements of the domain. <br> <br> If you have a minimum password age and have recently changed the password within that window of time, you're not able to change the password again until it reaches the specified age in your domain. For testing purposes, the minimum age should be set to 0. <br> <br> If you have password history requirements enabled, then you must select a password that hasn't been used in the last *N* times, where *N* is the password history setting. If you do select a password that has been used in the last *N* times, then you see a failure in this case. For testing purposes, the password history should be set to 0. <br> <br> If you have password complexity requirements, all of them are enforced when the user attempts to change or reset a password. <br> <br> If you have password filters enabled and a user selects a password that doesn't meet the filtering criteria, then the reset or change operation fails. |
-| 6329 | MMS(3040): admaexport.cpp(2837): The server doesn't contain the LDAP password policy control. | This problem occurs if LDAP_SERVER_POLICY_HINTS_OID control (1.2.840.113556.1.4.2066) isn't enabled on the DCs. To use the password writeback feature, you must enable the control. To do so, the DCs must be on Windows Server 2008R2 or later. |
+| 6329 | MMS(3040): admaexport.cpp(2837): The server doesn't contain the LDAP password policy control. | This problem occurs if LDAP_SERVER_POLICY_HINTS_OID control (1.2.840.113556.1.4.2066) isn't enabled on the DCs. To use the password writeback feature, you must enable the control. To do so, the DCs must be on Windows Server 2016 or later. |
| HR 8023042 | Synchronization Engine returned an error hr=80230402, message=An attempt to get an object failed because there are duplicated entries with the same anchor. | This error occurs when the same user ID is enabled in multiple domains. An example is if you're syncing account and resource forests and have the same user ID present and enabled in each forest. <br> <br> This error can also occur if you use a non-unique anchor attribute, like an alias or UPN, and two users share that same anchor attribute. <br> <br> To resolve this problem, ensure that you don't have any duplicated users within your domains and that you use a unique anchor attribute for each user. | ### If the source of the event is PasswordResetService
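When chasing event 6329, it can help to confirm the effective domain password policy (minimum age, history, complexity) before retesting writeback. A rough sketch, assuming a domain-joined Windows machine with the ActiveDirectory PowerShell module (RSAT) installed:

```python
import json
import subprocess

# Query the default domain password policy through PowerShell and parse it as JSON.
# Assumption: the ActiveDirectory module is available and the caller can read domain policy.
command = "Get-ADDefaultDomainPasswordPolicy | ConvertTo-Json"
output = subprocess.run(
    ["powershell", "-NoProfile", "-Command", command],
    capture_output=True, text=True, check=True,
).stdout

policy = json.loads(output)
# Event 6329 is usually caused by one of these settings rejecting the new password.
for key in ("MinPasswordAge", "PasswordHistoryCount", "ComplexityEnabled"):
    print(key, "=", policy.get(key))
```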
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/redemption-experience.md
Previously updated : 03/31/2022 Last updated : 04/07/2022
For just-in-time (JIT) redemptions, where redemption is through a tenanted appli
## Consent experience for the guest
-When a guest signs in to access resources in a partner organization for the first time, they're guided through the following pages.
+When a guest signs in to a resource in a partner organization for the first time, they're presented with the following consent experience. These consent pages are shown to the guest only after sign-in, and they aren't displayed at all if the user has already accepted them.
1. The guest reviews the **Review permissions** page describing the inviting organization's privacy statement. A user must **Accept** the use of their information in accordance with the inviting organization's privacy policies to continue.
When a guest signs in to access resources in a partner organization for the firs
![Screenshot showing the Apps access panel](media/redemption-experience/myapps.png)
-> [!NOTE]
-> The consent experience appears only after the user signs in, and not before. There are some scenarios where the consent experience will not be displayed to the user, for example:
-> - The user already accepted the consent experience
-> - The admin [grants tenant-wide admin consent to an application](../manage-apps/grant-admin-consent.md)
 - In your directory, the guest's **Invitation accepted** value changes to **Yes**. If an MSA was created, the guest's **Source** shows **Microsoft Account**. For more information about guest user account properties, see [Properties of an Azure AD B2B collaboration user](user-properties.md). If you see an error that requires admin consent while accessing an application, see [how to grant admin consent to apps](../develop/v2-admin-consent.md).
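The **Invitation accepted** state is also exposed programmatically through the Microsoft Graph `externalUserState` property on guest users. A sketch (Python; the access token is assumed to come from your own app registration with `User.Read.All`):

```python
import os
import requests

token = os.environ["GRAPH_ACCESS_TOKEN"]  # assumption: token acquired elsewhere (for example, via MSAL)

# Filtering on userType is an advanced query, so $count=true and ConsistencyLevel: eventual are required.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/users",
    params={
        "$filter": "userType eq 'Guest'",
        "$select": "displayName,mail,externalUserState,externalUserStateChangeDateTime",
        "$count": "true",
    },
    headers={"Authorization": f"Bearer {token}", "ConsistencyLevel": "eventual"},
)
resp.raise_for_status()
for guest in resp.json().get("value", []):
    # externalUserState stays "PendingAcceptance" until the guest redeems the invitation.
    print(guest["displayName"], guest.get("externalUserState"))
```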
active-directory Concept Activity Logs Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md
The following table contains estimated costs per month for a basic Event Hub in
-To review costs related to managing the Azure Monitor logs, see [Manage cost by controlling data volume and retention in Azure Monitor logs](../../azure-monitor/logs/manage-cost-storage.md).
+To review costs related to managing the Azure Monitor logs, see [Azure Monitor Logs pricing details](../../azure-monitor/logs/cost-logs.md).
## Frequently asked questions
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Previously updated : 03/17/2022 Last updated : 04/03/2022
This article lists the Azure AD built-in roles you can assign to allow managemen
> | [Teams Devices Administrator](#teams-devices-administrator) | Can perform management related tasks on Teams certified devices. | 3d762c5a-1b6c-493f-843e-55a3b42923d4 | > | [Usage Summary Reports Reader](#usage-summary-reports-reader) | Can see only tenant level aggregates in Microsoft 365 Usage Analytics and Productivity Score. | 75934031-6c7e-415a-99d7-48dbd49e875e | > | [User Administrator](#user-administrator) | Can manage all aspects of users and groups, including resetting passwords for limited admins. | fe930be7-5e62-47db-91af-98c3a49a38b1 |
+> | [Virtual Visits Administrator](#virtual-visits-administrator) | Manage and share Virtual Visits information and metrics from admin centers or the Virtual Visits app. | e300d9e7-4a2b-4295-9eff-f1c78b36cc98 |
> | [Windows 365 Administrator](#windows-365-administrator) | Can provision and manage all aspects of Cloud PCs. | 11451d60-acb2-45eb-a7d6-43d0f0125c13 | > | [Windows Update Deployment Administrator](#windows-update-deployment-administrator) | Can create and manage all aspects of Windows Update deployments through the Windows Update for Business deployment service. | 32696413-001a-46ae-978c-ce0f6b3620d2 |
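The template IDs in this table are what you pass to Microsoft Graph when assigning a built-in role programmatically. A minimal sketch (Python; the principal ID and token are placeholders, and the example reuses the User Administrator template ID from the table above):

```python
import os
import requests

token = os.environ["GRAPH_ACCESS_TOKEN"]  # assumption: token with RoleManagement.ReadWrite.Directory

assignment = {
    "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
    "roleDefinitionId": "fe930be7-5e62-47db-91af-98c3a49a38b1",  # User Administrator template ID
    "principalId": "<object-id-of-user-or-group>",               # placeholder
    "directoryScopeId": "/",                                     # tenant-wide scope
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments",
    headers={"Authorization": f"Bearer {token}"},
    json=assignment,
)
resp.raise_for_status()
print(resp.json()["id"])
```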
Users with this role have all permissions in the Azure Information Protection se
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.azure.informationProtection/allEntities/allTasks | Manage all aspects of Azure Information Protection | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets |
User can create and manage policy keys and secrets for token encryption, token s
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/b2cTrustFrameworkKeySet/allProperties/allTasks | Read and update all properties of authorization policies |
+> | microsoft.directory/b2cTrustFrameworkKeySet/allProperties/allTasks | Read and configure key sets in Azure Active Directory B2C |
## B2C IEF Policy Administrator
Users in this role have the ability to create, read, update, and delete all cust
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/b2cTrustFrameworkPolicy/allProperties/allTasks | Read and configure key sets in Azure Active Directory B2C |
+> | microsoft.directory/b2cTrustFrameworkPolicy/allProperties/allTasks | Read and configure custom policies in Azure Active Directory B2C |
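For reference, the key sets and custom (IEF) policies governed by the two B2C roles above are exposed through the Microsoft Graph beta `trustFramework` endpoints. A hedged sketch (Python; assumes a token against the B2C tenant with the appropriate policy and key set permissions):

```python
import os
import requests

token = os.environ["GRAPH_ACCESS_TOKEN"]
headers = {"Authorization": f"Bearer {token}"}

# Custom (IEF) policies managed by the B2C IEF Policy Administrator role.
policies = requests.get("https://graph.microsoft.com/beta/trustFramework/policies", headers=headers)
policies.raise_for_status()
print([p["id"] for p in policies.json().get("value", [])])

# Key sets used for token encryption and signing.
keysets = requests.get("https://graph.microsoft.com/beta/trustFramework/keySets", headers=headers)
keysets.raise_for_status()
print([k["id"] for k in keysets.json().get("value", [])])
```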
## Billing Administrator
Users in this role can enable, disable, and delete devices in Azure AD and read
> | Actions | Description | > | | | > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices | > | microsoft.directory/devices/delete | Delete devices from Azure AD | > | microsoft.directory/devices/disable | Disable devices in Azure AD |
In | Can do
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Cloud App Security | > | microsoft.azure.informationProtection/allEntities/allTasks | Manage all aspects of Azure Information Protection | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health |
Users with this role have the ability to manage Azure Active Directory Condition
> | microsoft.directory/conditionalAccessPolicies/basic/update | Update basic properties for conditional access policies | > | microsoft.directory/conditionalAccessPolicies/owners/update | Update owners for conditional access policies | > | microsoft.directory/conditionalAccessPolicies/tenantDefault/update | Update the default tenant for conditional access policies |
-> | microsoft.directory/crossTenantAccessPolicies/create | Create cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/delete | Delete cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/standard/read | Read basic properties of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/owners/read | Read owners of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/policyAppliedTo/read | Read the policyAppliedTo property of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/basic/update | Update basic properties of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/owners/update | Update owners of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/tenantDefault/update | Update the default tenant for cross-tenant access policies |
## Customer LockBox Access Approver
Users in this role can manage the Desktop Analytics service. This includes the a
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.office365.desktopAnalytics/allEntities/allTasks | Manage all aspects of Desktop Analytics |
Do not use. This role is automatically assigned to the Azure AD Connect service,
> | microsoft.directory/applications/permissions/update | Update exposed permissions and required permissions on all types of applications | > | microsoft.directory/applications/policies/update | Update policies of applications | > | microsoft.directory/applications/tag/update | Update tags of applications |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/hybridAuthenticationPolicy/allProperties/allTasks | Manage hybrid authentication policy in Azure AD | > | microsoft.directory/organization/dirSync/update | Update the organization directory sync property | > | microsoft.directory/passwordHashSync/allProperties/allTasks | Manage all aspects of Password Hash Synchronization (PHS) in Azure AD |
Users with this role can create and manage user flows (also called "built-in" po
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/b2cUserFlow/allProperties/allTasks | Read and configure user attributes in Azure Active Directory B2C |
+> | microsoft.directory/b2cUserFlow/allProperties/allTasks | Read and configure user flow in Azure Active Directory B2C |
## External ID User Flow Attribute Administrator
Users with this role add or delete custom attributes available to all user flows
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/b2cUserAttribute/allProperties/allTasks | Read and configure custom policies in Azure Active Directory B2C |
+> | microsoft.directory/b2cUserAttribute/allProperties/allTasks | Read and configure user attribute in Azure Active Directory B2C |
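The user flows and custom attributes these External ID roles manage can also be listed through Microsoft Graph (beta). A sketch, assuming a suitably permissioned token:

```python
import os
import requests

token = os.environ["GRAPH_ACCESS_TOKEN"]
headers = {"Authorization": f"Bearer {token}"}

# User flows ("built-in" policies) covered by the b2cUserFlow permission above.
flows = requests.get("https://graph.microsoft.com/beta/identity/b2cUserFlows", headers=headers)
flows.raise_for_status()
print([f["id"] for f in flows.json().get("value", [])])

# Custom user attributes covered by the b2cUserAttribute permission above.
attrs = requests.get("https://graph.microsoft.com/beta/identity/userFlowAttributes", headers=headers)
attrs.raise_for_status()
print([a["id"] for a in attrs.json().get("value", [])])
```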
## External Identity Provider Administrator
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/users/authenticationMethods/delete | Delete authentication methods for users | > | microsoft.directory/users/authenticationMethods/standard/read | Read standard properties of authentication methods for users | > | microsoft.directory/users/authenticationMethods/basic/update | Update basic properties of authentication methods for users |
-> | microsoft.directory/authorizationPolicy/allProperties/allTasks | Manage all aspects of authorization policies |
+> | microsoft.directory/authorizationPolicy/allProperties/allTasks | Manage all aspects of authorization policy |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices | > | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Cloud App Security | > | microsoft.directory/connectors/create | Create application proxy connectors |
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/passwordHashSync/allProperties/allTasks | Manage all aspects of Password Hash Synchronization (PHS) in Azure AD | > | microsoft.directory/policies/allProperties/allTasks | Create and delete policies, and read and update all properties | > | microsoft.directory/conditionalAccessPolicies/allProperties/allTasks | Manage all properties of conditional access policies |
-> | microsoft.directory/crossTenantAccessPolicies/allProperties/allTasks | Manage all aspects of cross-tenant access policies |
+> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/allowedCloudEndpoints/update | Update allowed cloud endpoints of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/basic/update | Update basic settings of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/b2bCollaboration/update | Update Azure AD B2B collaboration settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/tenantRestrictions/update | Update tenant restrictions of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/partners/create | Create cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/delete | Delete cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/b2bCollaboration/update | Update Azure AD B2B collaboration settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/tenantRestrictions/update | Update tenant restrictions of cross-tenant access policy for partners |
> | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management | > | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs | > | microsoft.directory/roleAssignments/allProperties/allTasks | Create and delete role assignments, and read and update all role assignment properties |
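The cross-tenant access policy permissions added above map onto the Microsoft Graph cross-tenant access API. A hedged sketch of reading the default policy and adding a partner configuration (Python; the partner tenant ID is a placeholder, and your tenant may require the beta endpoint instead of v1.0):

```python
import os
import requests

token = os.environ["GRAPH_ACCESS_TOKEN"]  # assumption: token with Policy.Read/ReadWrite.CrossTenantAccess
headers = {"Authorization": f"Bearer {token}"}
base = "https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy"

# Read the default cross-tenant access policy (B2B collaboration, B2B direct connect, and so on).
default = requests.get(f"{base}/default", headers=headers)
default.raise_for_status()
print(default.json())

# Create a partner-specific configuration for one external tenant (placeholder GUID).
partner = requests.post(f"{base}/partners", headers=headers, json={"tenantId": "<partner-tenant-id>"})
partner.raise_for_status()
print(partner.json()["tenantId"])
```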
Users in this role can read settings and administrative information across Micro
> | microsoft.directory/applications/synchronization/standard/read | Read provisioning settings associated with the application object | > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties | > | microsoft.directory/users/authenticationMethods/standard/restrictedRead | Read standard properties of authentication methods that do not include personally identifiable information for users |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices | > | microsoft.directory/cloudAppSecurity/allProperties/read | Read all properties for Cloud app security | > | microsoft.directory/connectors/allProperties/read | Read all properties of application proxy connectors |
Users in this role can read settings and administrative information across Micro
> | microsoft.directory/permissionGrantPolicies/standard/read | Read standard properties of permission grant policies | > | microsoft.directory/policies/allProperties/read | Read all properties of policies | > | microsoft.directory/conditionalAccessPolicies/allProperties/read | Read all properties of conditional access policies |
-> | microsoft.directory/crossTenantAccessPolicies/allProperties/read | Read all properties of cross-tenant access policies |
+> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
> | microsoft.directory/deviceManagementPolicies/standard/read | Read standard properties on device management application policies | > | microsoft.directory/deviceRegistrationPolicy/standard/read | Read standard properties on device registration policies | > | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management |
Users with this role have global permissions to manage settings within Microsoft
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center | > | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
Users in this role can add, remove, and update license assignments on users, gro
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/groups/assignLicense | Assign product licenses to groups for group-based licensing | > | microsoft.directory/groups/reprocessLicenseAssignment | Reprocess license assignments for group-based licensing | > | microsoft.directory/users/assignLicense | Manage user licenses |
Users with this role can manage role assignments in Azure Active Directory, as w
> | microsoft.directory/accessReviews/definitions.groupsAssignableToRoles/delete | Delete access reviews for membership in groups that are assignable to Azure AD roles | > | microsoft.directory/accessReviews/definitions.groups/allProperties/read | Read all properties of access reviews for membership in Security and Microsoft 365 groups, including role-assignable groups. | > | microsoft.directory/administrativeUnits/allProperties/allTasks | Create and manage administrative units (including members) |
-> | microsoft.directory/authorizationPolicy/allProperties/allTasks | Manage all aspects of authorization policies |
+> | microsoft.directory/authorizationPolicy/allProperties/allTasks | Manage all aspects of authorization policy |
> | microsoft.directory/directoryRoles/allProperties/allTasks | Create and delete directory roles, and read and update all properties | > | microsoft.directory/groupsAssignableToRoles/create | Create role-assignable groups | > | microsoft.directory/groupsAssignableToRoles/delete | Delete role-assignable groups |
Azure Advanced Threat Protection | Monitor and respond to suspicious security ac
> | | | > | microsoft.directory/applications/policies/update | Update policies of applications | > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
+> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/allowedCloudEndpoints/update | Update allowed cloud endpoints of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/basic/update | Update basic settings of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/b2bCollaboration/update | Update Azure AD B2B collaboration settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/tenantRestrictions/update | Update tenant restrictions of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/partners/create | Create cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/delete | Delete cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/b2bCollaboration/update | Update Azure AD B2B collaboration settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/tenantRestrictions/update | Update tenant restrictions of cross-tenant access policy for partners |
> | microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management | > | microsoft.directory/hybridAuthenticationPolicy/allProperties/allTasks | Manage hybrid authentication policy in Azure AD | > | microsoft.directory/identityProtection/allProperties/read | Read all resources in Azure AD Identity Protection |
Azure Advanced Threat Protection | Monitor and respond to suspicious security ac
> | microsoft.directory/conditionalAccessPolicies/basic/update | Update basic properties for conditional access policies | > | microsoft.directory/conditionalAccessPolicies/owners/update | Update owners for conditional access policies | > | microsoft.directory/conditionalAccessPolicies/tenantDefault/update | Update the default tenant for conditional access policies |
-> | microsoft.directory/crossTenantAccessPolicies/create | Create cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/delete | Delete cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/standard/read | Read basic properties of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/owners/read | Read owners of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/policyAppliedTo/read | Read the policyAppliedTo property of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/basic/update | Update basic properties of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/owners/update | Update owners of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/tenantDefault/update | Update the default tenant for cross-tenant access policies |
> | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management | > | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs | > | microsoft.directory/servicePrincipals/policies/update | Update policies of service principals |
Users with this role can manage alerts and have global read-only access on secur
> | Actions | Description | > | | | > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Cloud App Security | > | microsoft.directory/identityProtection/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Azure AD Identity Protection | > | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management |
Identity Protection Center | Read all security reports and settings information
> | | | > | microsoft.directory/accessReviews/definitions/allProperties/read | Read all properties of access reviews of all reviewable resources in Azure AD | > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices | > | microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management | > | microsoft.directory/identityProtection/allProperties/read | Read all resources in Azure AD Identity Protection |
Users in this role can manage all aspects of the Microsoft Teams workload via th
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/groups/hiddenMembers/read | Read hidden members of Security groups and Microsoft 365 groups, including role-assignable groups | > | microsoft.directory/groups.unified/create | Create Microsoft 365 groups, excluding role-assignable groups | > | microsoft.directory/groups.unified/delete | Delete Microsoft 365 groups, excluding role-assignable groups |
Users in this role can manage all aspects of the Microsoft Teams workload via th
> | microsoft.office365.usageReports/allEntities/allProperties/read | Read Office 365 usage reports | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center | > | microsoft.teams/allEntities/allProperties/allTasks | Manage all resources in Teams |
+> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/allowedCloudEndpoints/update | Update allowed cloud endpoints of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/basic/update | Update basic settings of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/b2bCollaboration/update | Update Azure AD B2B collaboration settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/tenantRestrictions/update | Update tenant restrictions of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/partners/create | Create cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/delete | Delete cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/b2bCollaboration/update | Update Azure AD B2B collaboration settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/tenantRestrictions/update | Update tenant restrictions of cross-tenant access policy for partners |
## Teams Communications Administrator
Users in this role can manage aspects of the Microsoft Teams workload related to
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
Users in this role can troubleshoot communication issues within Microsoft Teams
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center | > | microsoft.office365.skypeForBusiness/allEntities/allTasks | Manage all aspects of Skype for Business Online |
Users in this role can troubleshoot communication issues within Microsoft Teams
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center | > | microsoft.office365.skypeForBusiness/allEntities/allTasks | Manage all aspects of Skype for Business Online |
Users with this role can create users, and manage all aspects of users with some
> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+## Virtual Visits Administrator
+
+Users with this role can do the following tasks:
+
+- Manage and configure all aspects of Virtual Visits in Bookings in the Microsoft 365 admin center, and in the Teams EHR connector
+- View usage reports for Virtual Visits in the Teams admin center, Microsoft 365 admin center, and Power BI
+- View features and settings in the Microsoft 365 admin center, but can't edit any settings
+
+Virtual Visits are a simple way to schedule and manage online and video appointments for staff and attendees. For example, usage reporting can show how sending SMS text messages before appointments can reduce the number of people who don't show up for appointments.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | microsoft.virtualVisits/allEntities/allProperties/allTasks | Manage and share Virtual Visits information and metrics from admin centers or the Virtual Visits app |
+> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
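To see who currently holds this new role, you can filter role assignments on its template ID. A sketch (Python; assumes a token with `RoleManagement.Read.Directory`):

```python
import os
import requests

token = os.environ["GRAPH_ACCESS_TOKEN"]

# e300d9e7-... is the Virtual Visits Administrator role template ID listed above.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments",
    params={"$filter": "roleDefinitionId eq 'e300d9e7-4a2b-4295-9eff-f1c78b36cc98'"},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
for assignment in resp.json().get("value", []):
    print(assignment["principalId"], assignment["directoryScopeId"])
```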
## Windows 365 Administrator

Users with this role have global permissions on Windows 365 resources, when the service is present. Additionally, this role contains the ability to manage users and devices in order to associate policy, as well as create and manage groups.
active-directory Adobe Identity Management Provisioning Oidc Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adobe-identity-management-provisioning-oidc-tutorial.md
+
+ Title: 'Tutorial: Configure Adobe Identity Management (OIDC) for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Adobe Identity Management (OIDC).
+documentationcenter: ''
+writer: twimmers
+ms.assetid: baa54168-d23a-49d8-94d1-28476138cd90
+ Last updated : 04/06/2022
+# Tutorial: Configure Adobe Identity Management (OIDC) for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Adobe Identity Management (OIDC) and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to Adobe Identity Management (OIDC) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Adobe Identity Management (OIDC)
+> * Disable users in Adobe Identity Management (OIDC) when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Adobe Identity Management (OIDC)
+> * Provision groups and group memberships in Adobe Identity Management (OIDC)
+> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Adobe Identity Management (OIDC) (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
+* A federated directory in the [Adobe Admin Console](https://adminconsole.adobe.com/) with verified domains.
+* Review the [Adobe documentation](https://helpx.adobe.com/enterprise/admin-guide.html/enterprise/using/add-azure-sync.ug.html) on user provisioning.
+
+> [!NOTE]
+> If your organization uses the User Sync Tool or a UMAPI integration, you must first pause the integration. Then, add Azure AD automatic provisioning to automate user management from the Azure Portal. Once Azure AD automatic provisioning is configured and running, you can completely remove the User Sync Tool or UMAPI integration.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Adobe Identity Management (OIDC)](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Adobe Identity Management (OIDC) to support provisioning with Azure AD
+
+1. Log in to the [Adobe Admin Console](https://adminconsole.adobe.com/). Navigate to **Settings > Directory Details > Sync**.
+
+1. Click **Add Sync**.
+
+ ![Add](media/adobe-identity-management-provisioning-tutorial/add-sync.png)
+
+1. Select **Sync users from Microsoft Azure** and click **Next**.
+
+ ![Screenshot that shows 'Sync users from Microsoft Azure Active Directory' selected.](media/adobe-identity-management-provisioning-tutorial/sync-users.png)
+
+1. Copy and save the **Tenant URL** and the **Secret token**. These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your Adobe Identity Management (OIDC) application in the Azure portal.
+
+ ![Sync](media/adobe-identity-management-provisioning-tutorial/token.png)
+
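Before wiring these values into the Azure portal, you can optionally sanity-check the Tenant URL and Secret token with a plain SCIM request. A rough sketch (Python; the `/ServiceProviderConfig` path follows the SCIM 2.0 convention and is an assumption about Adobe's endpoint):

```python
import os
import requests

tenant_url = os.environ["ADOBE_TENANT_URL"]      # Tenant URL copied from the Adobe Admin Console
secret_token = os.environ["ADOBE_SECRET_TOKEN"]  # Secret token copied from the Adobe Admin Console

# SCIM 2.0 services normally expose /ServiceProviderConfig as a read-only discovery endpoint.
resp = requests.get(
    f"{tenant_url.rstrip('/')}/ServiceProviderConfig",
    headers={"Authorization": f"Bearer {secret_token}"},
)
print(resp.status_code)  # 200 suggests the URL and token pair is usable
print(resp.json() if resp.ok else resp.text)
```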
+## Step 3. Add Adobe Identity Management (OIDC) from the Azure AD application gallery
+
+Add Adobe Identity Management (OIDC) from the Azure AD application gallery to start managing provisioning to Adobe Identity Management (OIDC). If you have previously set up Adobe Identity Management (OIDC) for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+
+## Step 5. Configure automatic user provisioning to Adobe Identity Management (OIDC)
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Adobe Identity Management (OIDC) based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Adobe Identity Management (OIDC) in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **Adobe Identity Management (OIDC)**.
+
+ ![The Adobe Identity Management (OIDC) link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Adobe Identity Management (OIDC) Tenant URL and Secret Token retrieved earlier from Step 2. Click **Test Connection** to ensure Azure AD can connect to Adobe Identity Management (OIDC). If the connection fails, ensure your Adobe Identity Management (OIDC) account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Adobe Identity Management (OIDC)**.
+
+1. Review the user attributes that are synchronized from Azure AD to Adobe Identity Management (OIDC) in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Adobe Identity Management (OIDC) for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Adobe Identity Management (OIDC) API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Adobe Identity Management (OIDC)
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||
+ |emails[type eq "work"].value|String||
+ |addresses[type eq "work"].country|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |urn:ietf:params:scim:schemas:extension:Adobe:2.0:User:emailAliases|String||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Adobe Identity Management (OIDC)**.
+
+1. Review the group attributes that are synchronized from Azure AD to Adobe Identity Management (OIDC) in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Adobe Identity Management (OIDC) for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Adobe Identity Management (OIDC)
+ |||||
+ |displayName|String|&check;|&check;
+ |members|Reference||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Adobe Identity Management (OIDC), change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Adobe Identity Management (OIDC) by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
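The provisioning logs mentioned above are also queryable through Microsoft Graph, which can be handy for automated checks. A sketch (Python; assumes a token with `AuditLog.Read.All`, and that your tenant exposes the endpoint on v1.0 rather than beta):

```python
import os
import requests

token = os.environ["GRAPH_ACCESS_TOKEN"]

# Pull the most recent provisioning events and print a short summary of each.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/provisioning",
    params={"$top": "20"},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
for event in resp.json().get("value", []):
    # Field names are read defensively; adjust to the payload your tenant returns.
    status = (event.get("provisioningStatusInfo") or {}).get("status")
    target = (event.get("targetIdentity") or {}).get("displayName")
    print(event.get("activityDateTime"), status, target)
```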
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Adobe Identity Management Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adobe-identity-management-provisioning-tutorial.md
This tutorial describes the steps you need to perform in both Adobe Identity Man
## Capabilities supported > [!div class="checklist"] > * Create users in Adobe Identity Management
-> * Remove users in Adobe Identity Management when they do not require access anymore
+> * Disable users in Adobe Identity Management when they do not require access anymore
> * Keep user attributes synchronized between Azure AD and Adobe Identity Management > * Provision groups and group memberships in Adobe Identity Management
-> * Single sign-on to Adobe Identity Management (recommended)
+> * [Single sign-on](adobe-identity-management-tutorial.md) to Adobe Identity Management (recommended)
## Prerequisites
The scenario outlined in this tutorial assumes that you already have the followi
## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
-2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-3. Determine what data to [map between Azure AD and Adobe Identity Management](../app-provisioning/customize-application-attributes.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Adobe Identity Management](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure Adobe Identity Management to support provisioning with Azure AD 1. Log in to the [Adobe Admin Console](https://adminconsole.adobe.com/). Navigate to **Settings > Directory Details > Sync**.
-2. Click **Add Sync**.
+1. Click **Add Sync**.
![Add](media/adobe-identity-management-provisioning-tutorial/add-sync.png)
-3. Select **Sync users from Microsoft Azure** and click **Next**.
+1. Select **Sync users from Microsoft Azure** and click **Next**.
![Screenshot that shows 'Sync users from Microsoft Azure Active Directory' selected.](media/adobe-identity-management-provisioning-tutorial/sync-users.png)
-4. Copy and save the **Tenant URL** and the **Secret token**. These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your Adobe Identity Management application in the Azure portal.
+1. Copy and save the **Tenant URL** and the **Secret token**. These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your Adobe Identity Management application in the Azure portal.
![Sync](media/adobe-identity-management-provisioning-tutorial/token.png)
This section guides you through the steps to configure the Azure AD provisioning
![Enterprise applications blade](common/enterprise-applications.png)
-2. In the applications list, select **Adobe Identity Management**.
+1. In the applications list, select **Adobe Identity Management**.
![The Adobe Identity Management link in the Applications list](common/all-applications.png)
-3. Select the **Provisioning** tab.
+1. Select the **Provisioning** tab.
![Provisioning tab](common/provisioning.png)
-4. Set the **Provisioning Mode** to **Automatic**.
+1. Set the **Provisioning Mode** to **Automatic**.
![Provisioning tab automatic](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input your Adobe Identity Management Tenant URL and Secret Token retrieved earlier from Step 2. Click **Test Connection** to ensure Azure AD can connect to Adobe Identity Management. If the connection fails, ensure your Adobe Identity Management account has Admin permissions and try again.
+1. Under the **Admin Credentials** section, input your Adobe Identity Management Tenant URL and Secret Token retrieved earlier from Step 2. Click **Test Connection** to ensure Azure AD can connect to Adobe Identity Management. If the connection fails, ensure your Adobe Identity Management account has Admin permissions and try again.
![Token](common/provisioning-testconnection-tenanturltoken.png)
-6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
![Notification Email](common/provisioning-notification-email.png)
-7. Select **Save**.
+1. Select **Save**.
-8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Adobe Identity Management**.
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Adobe Identity Management**.
-9. Review the user attributes that are synchronized from Azure AD to Adobe Identity Management in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Adobe Identity Management for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Adobe Identity Management API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+1. Review the user attributes that are synchronized from Azure AD to Adobe Identity Management in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Adobe Identity Management for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Adobe Identity Management API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
- |Attribute|Type|
- |||
- |userName|String|
- |emails[type eq "work"].value|String|
- |active|Boolean|
- |addresses[type eq "work"].country|String|
- |name.givenName|String|
- |name.familyName|String|
- |urn:ietf:params:scim:schemas:extension:Adobe:2.0:User:emailAliases|String|
+ |Attribute|Type|Supported for filtering|Required by Adobe Identity Management|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||
+ |emails[type eq "work"].value|String||
+ |addresses[type eq "work"].country|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |urn:ietf:params:scim:schemas:extension:Adobe:2.0:User:emailAliases|String||
-10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Adobe Identity Management**.
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Adobe Identity Management**.
-11. Review the group attributes that are synchronized from Azure AD to Adobe Identity Management in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Adobe Identity Management for update operations. Select the **Save** button to commit any changes.
+1. Review the group attributes that are synchronized from Azure AD to Adobe Identity Management in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Adobe Identity Management for update operations. Select the **Save** button to commit any changes.
- |Attribute|Type|
- |||
- |displayName|String|
- |members|Reference|
+ |Attribute|Type|Supported for filtering|Required by Adobe Identity Management|
+ |||||
+ |displayName|String|&check;|&check;
+ |members|Reference||
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-13. To enable the Azure AD provisioning service for Adobe Identity Management, change the **Provisioning Status** to **On** in the **Settings** section.
+1. To enable the Azure AD provisioning service for Adobe Identity Management, change the **Provisioning Status** to **On** in the **Settings** section.
![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-14. Define the users and/or groups that you would like to provision to Adobe Identity Management by choosing the desired values in **Scope** in the **Settings** section.
+1. Define the users and/or groups that you would like to provision to Adobe Identity Management by choosing the desired values in **Scope** in the **Settings** section.
![Provisioning Scope](common/provisioning-scope.png)
-15. When you are ready to provision, click **Save**.
+1. When you are ready to provision, click **Save**.
![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
This operation starts the initial synchronization cycle of all users and groups
## Step 6. Monitor your deployment Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
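For a quick command-line spot check of the provisioning logs referenced above, you can also query Microsoft Graph. The following is a minimal sketch, not part of the original tutorial: it assumes the Azure CLI is installed, you're signed in with an account that has the **AuditLog.Read.All** permission, and the `auditLogs/provisioning` endpoint is available in your tenant.

```azurecli-interactive
# Minimal sketch: list the 10 most recent provisioning events through Microsoft Graph.
# Assumes you're signed in with `az login` and have the AuditLog.Read.All permission.
az rest --method GET \
  --url 'https://graph.microsoft.com/v1.0/auditLogs/provisioning?$top=10'
```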
-## Additional resources
+## More resources
* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) * [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
active-directory Bullseyetdp Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bullseyetdp-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
|userName|String|&check;|&check; |externalId|String|&check;|&check; |userType|String||&check;
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference||
|active|Boolean|| |title|String||&check; |emails[type eq "work"].value|String||&check;
active-directory Gong Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gong-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String|| |active|Boolean|| |title|String||
- |emails[type eq "work"].value|String||
+ |emails[type eq "work"].value|String||&check;
|name.givenName|String||&check; |name.familyName|String||&check; |phoneNumbers[type eq "work"].value|String||
Once you've configured provisioning, use the following resources to monitor your
* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ## Change Log
-03/23/2022 - Added support for **Group Provisioning**.
+* 03/23/2022 - Added support for **Group Provisioning**.
+* 04/06/2022 - **emails[type eq "work"].value** is now a required attribute.
## More resources
active-directory Kantegassoforjira Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kantegassoforjira-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Kantega SSO for JIRA | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Kantega SSO for JIRA.
+ Title: 'Tutorial: Integrate Azure Active Directory with Kantega SSO for JIRA | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Jira using Kantega SSO.
Previously updated : 05/27/2021 Last updated : 04/04/2022
-# Tutorial: Azure Active Directory integration with Kantega SSO for JIRA
+# Tutorial: Integrate Azure Active Directory with Kantega SSO for JIRA
-In this tutorial, you'll learn how to integrate Kantega SSO for JIRA with Azure Active Directory (Azure AD). When you integrate Kantega SSO for JIRA with Azure AD, you can:
+This tutorial walks you through configuring single sign-on for your Azure AD users in Jira by using the Kantega SSO app. With this configuration, you will be able to:
-* Control in Azure AD who has access to Kantega SSO for JIRA.
-* Enable your users to be automatically signed-in to Kantega SSO for JIRA with their Azure AD accounts.
+* Control which users have Jira access from Azure AD.
+* Automatically sign in to Jira when you have an active Azure AD session.
* Manage your accounts in one central location - the Azure portal.
-## Prerequisites
+Read more on the official [Kantega SSO documentation](https://kantega-sso.atlassian.net/wiki/spaces/KSE/pages/895844483/Azure+AD).
-To configure Azure AD integration with Kantega SSO for JIRA, you need the following items:
+## Prerequisites
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+To follow this tutorial, you need:
-* Kantega SSO for JIRA single sign-on enabled subscription.
+* An active Azure AD subscription. You can set up a [free account](https://azure.microsoft.com/free/).
+* A Jira Data Center instance. You can [try it for free](https://www.atlassian.com/software/jira/download/data-center).
+* Kantega SSO app for Jira from Atlassian Marketplace. You can [try it for free](https://marketplace.atlassian.com/apps/1211923/k-sso-saml-kerberos-openid-oidc-oauth-for-jira?tab=overview&hosting=datacenter).
## Scenario description
-In this tutorial, you configure and test Azure AD single sign-on in a test environment.
+In this tutorial, you will configure and test single sign-on with Azure AD in a Jira test environment.
-* Kantega SSO for JIRA supports **SP and IDP** initiated SSO.
+* Kantega SSO supports **SAML and OIDC**.
+* Kantega SSO supports **SP and IDP** initiated SSO.
+* Kantega SSO supports automated user provisioning and deprovisioning (recommended).
+* Kantega SSO supports Just-in-Time user provisioning.
## Add Kantega SSO for JIRA from the gallery
To configure the integration of Kantega SSO for JIRA into Azure AD, you need to
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add new application, select **New application**.
+1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Kantega SSO for JIRA** in the search box.
-1. Select **Kantega SSO for JIRA** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. Select **Kantega SSO for JIRA** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
## Configure and test Azure AD SSO for Kantega SSO for JIRA
To configure and test Azure AD SSO with Kantega SSO for JIRA, perform the follow
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Kantega SSO for JIRA SSO](#configure-kantega-sso-for-jira-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Kantega SSO for JIRA test user](#create-kantega-sso-for-jira-test-user)** - to have a counterpart of B.Simon in Kantega SSO for JIRA that is linked to the Azure AD representation of user.
+1. **[Configure Kantega SSO for JIRA SSO](#configure-kantega-sso-for-jira-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Kantega SSO for JIRA test user](#create-kantega-sso-for-jira-test-user)** - to have a counterpart of B.Simon in Kantega SSO for JIRA linked to the Azure AD representation of the user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<server-base-url>/plugins/servlet/no.kantega.saml/sp/<UNIQUE_ID>/login` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign-On URL. These values are received during the configuration of Jira plugin, which is explained later in the tutorial.
+ > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign-On URL. These values are received during the configuration of the Jira plugin.
6. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
In this section, you'll create a test user in the Azure portal called B.Simon.
1. In the **User** properties, follow these steps: 1. In the **Name** field, enter `B.Simon`. 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Select the **Show password** check box, and then write down the displayed value in the **Password** box.
1. Click **Create**. ### Assign the Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure Kantega SSO for JIRA SSO
-1. In a different web browser window, sign in to your JIRA on-premises server as an administrator.
-
-1. However on cog and click the **Add-ons**.
-
- ![Screenshot that shows the "Cog" icon selected and "Add-ons" selected from the drop-down.](./media/kantegassoforjira-tutorial/settings.png)
-
-1. Under Add-ons tab section, click **Find new add-ons**. Search **Kantega SSO for JIRA (SAML & Kerberos)** and click **Install** button to install the new SAML plugin.
-
- ![Screenshot that shows the "Find new Add-ons" section with "Kantego S S O for JIRA (S A M L & Kerberos)" in the search box and the "Install" button selected.](./media/kantegassoforjira-tutorial/install-tab.png)
-
-1. The plugin installation starts.
-
- ![Screenshot that shows the plugin "Installing" dialog.](./media/kantegassoforjira-tutorial/installation.png)
-
-1. Once the installation is complete. Click **Close**.
-
- ![Screenshot that shows the "Installed and ready to go!" dialog with the "Close" action selected.](./media/kantegassoforjira-tutorial/close-tab.png)
-
-1. Click **Manage**.
-
- ![Screenshot that shows the "Kantega S S O" app page with the "Manage" button selected.](./media/kantegassoforjira-tutorial/manage-tab.png)
-
-1. New plugin is listed under **INTEGRATIONS**. Click **Configure** to configure the new plugin.
-
- ![Screenshot that shows "INTEGRATIONS" in the left-side navigation menu highlighted and the "Configure" button selected in the "Manage add-ons" section.](./media/kantegassoforjira-tutorial/integration.png)
-
-1. In the **SAML** section. Select **Azure Active Directory (Azure AD)** from the **Add identity provider** dropdown.
-
- ![Screenshot that shows the "Add identity provider" drop-down with "Azure Active Directory (Azure A D)" selected.](./media/kantegassoforjira-tutorial/identity-provider.png)
-
-1. Select subscription level as **Basic**.
-
- ![Screenshot that shows the "Preparing Azure A D" section with "Basic" selected.](./media/kantegassoforjira-tutorial/basic-tab.png)
-
-1. On the **App properties** section, perform following steps:
-
- ![Screenshot that shows the "App properties" section with the "App I D U R L" textbox and copy button highlighted, and the "Next" button selected.](./media/kantegassoforjira-tutorial/properties.png)
-
- 1. Copy the **App ID URI** value and use it as **Identifier, Reply URL, and Sign-On URL** on the **Basic SAML Configuration** section in Azure portal.
-
- 1. Click **Next**.
-
-1. On the **Metadata import** section, perform following steps:
-
- ![Screenshot that shows the "Metadata import" section with "Metadata file on my computer" selected.](./media/kantegassoforjira-tutorial/metadata.png)
-
- 1. Select **Metadata file on my computer**, and upload metadata file, which you have downloaded from Azure portal.
-
- 1. Click **Next**.
-
-1. On the **Name and SSO location** section, perform following steps:
-
- ![Screenshot that shows the "Name and S S O location" with the "Identity provider name" textbox highlighted, and the "Next" button selected.](./media/kantegassoforjira-tutorial/location.png)
+Kantega SSO can be configured to use either SAML or OIDC as the SSO protocol. Choose one of the following guides:
- 1. Add Name of the Identity Provider in **Identity provider name** textbox (e.g Azure AD).
-
- 1. Click **Next**.
-
-1. Verify the Signing certificate and click **Next**.
-
- ![Screenshot that shows the "Signature verification" section with the "Next" button selected.](./media/kantegassoforjira-tutorial/certificate.png)
-
-1. On the **JIRA user accounts** section, perform following steps:
-
- ![Screenshot that shows the "JIRA user accounts" with the "Create users in JIRA's Internal Directory if needed" option highlighted and the "Next" button selected.](./media/kantegassoforjira-tutorial/accounts.png)
-
- 1. Select **Create users in JIRA's internal Directory if needed** and enter the appropriate name of the group for users (can be multiple no. of groups separated by comma).
-
- 1. Click **Next**.
-
-1. Click **Finish**.
-
- ![Screenshot that shows the "Summary" section with teh "Finish" button selected.](./media/kantegassoforjira-tutorial/finish-tab.png)
-
-1. On the **Known domains for Azure AD** section, perform following steps:
-
- ![Configure Single Sign-On](./media/kantegassoforjira-tutorial/save-tab.png)
-
- 1. Select **Known domains** from the left panel of the page.
-
- 2. Enter domain name in the **Known domains** textbox.
-
- 3. Click **Save**.
+* [Kantega SSO setup guide for Azure AD with SAML](https://kantega-sso.atlassian.net/wiki/spaces/KSE/pages/896696394/Azure+AD+SAML)
+* [Kantega SSO setup guide for Azure AD with OIDC](https://kantega-sso.atlassian.net/wiki/spaces/KSE/pages/896598077/Azure+AD+OIDC)
### Create Kantega SSO for JIRA test user
-To enable Azure AD users to sign in to JIRA, they must be provisioned into JIRA. In Kantega SSO for JIRA, provisioning is a manual task.
-
-**To provision a user account, perform the following steps:**
-
-1. Sign in to your JIRA on-premises server as an administrator.
-
-1. Hover on cog and click the **User management**.
-
- ![Screenshot that shows the "Cog" icon selected, and "User management" selected from the drop-down.](./media/kantegassoforjira-tutorial/user.png)
-
-1. Under **User management** tab section, click **Create user**.
-
- ![Screenshot that shows the "User management" section with the "Create user" button selected.](./media/kantegassoforjira-tutorial/create-user.png)
-
-1. On the **ΓÇ£Create new userΓÇ¥** dialog page, perform the following steps:
-
- ![Add Employee](./media/kantegassoforjira-tutorial/new-user.png)
-
- 1. In the **Email address** textbox, type the email address of user like Brittasimon@contoso.com.
-
- 2. In the **Full Name** textbox, type full name of the user like Britta Simon.
-
- 3. In the **Username** textbox, type the email of user like Brittasimon@contoso.com.
-
- 4. In the **Password** textbox, type the password of user.
-
- 5. Click **Create user**.
+To enable Azure AD users to sign in to Kantega SSO for JIRA, you must provision them. The application supports Just-in-Time user provisioning and automatic user provisioning using SCIM, or you can set up users manually. Read more about the [different provisioning options](https://kantega-sso.atlassian.net/wiki/spaces/KSE/pages/1769694/User+provisioning).
## Test SSO
-In this section, you test your Azure AD single sign-on configuration with following options.
+In this section, you test your Azure AD single sign-on configuration with the following options.
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to Kantega SSO for JIRA Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in the Azure portal. This will redirect to Kantega SSO for JIRA Sign-on URL, where you can initiate the login flow.
-* Go to Kantega SSO for JIRA Sign-on URL directly and initiate the login flow from there.
+* Go to Kantega SSO for JIRA Sign-on URL directly and initiate the login flow.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Kantega SSO for JIRA for which you set up the SSO.
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Kantega SSO for JIRA, for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the Kantega SSO for JIRA tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Kantega SSO for JIRA for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Kantega SSO for JIRA tile in the My Apps, you will be redirected to the application sign-on page for initiating the login flow if configured in SP mode. If configured in IDP mode, you should be automatically signed in to the Kantega SSO for JIRA, for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure Kantega SSO for JIRA you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure Kantega SSO for JIRA, you can enforce session control, which protects the exfiltration and infiltration of your organization's sensitive data in real-time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Klaxoon Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/klaxoon-provisioning-tutorial.md
This tutorial describes the steps you need to perform in both Klaxoon and Azure
> * Disable users in Klaxoon when they do not require access anymore. > * Keep user attributes synchronized between Azure AD and Klaxoon. > * Provide licenses to users in Klaxoon based on Azure AD Groups.
-> * [Single sign-on](klaxoon-saml-tutorial.md) to Klaxoon (recommended).
+> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Klaxoon (recommended).
## Prerequisites
active-directory Knowbe4 Security Awareness Training Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/knowbe4-security-awareness-training-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure KnowBe4 Security Awareness Training for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to KnowBe4 Security Awareness Training.
+
+documentationcenter: ''
+
+writer: Thwimmer
++
+ms.assetid: e71f7de4-33d0-46cc-85c9-29f24c3e1a25
+++
+ na
+ Last updated : 04/06/2022+++
+# Tutorial: Configure KnowBe4 Security Awareness Training for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both KnowBe4 Security Awareness Training and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [KnowBe4 Security Awareness Training](https://www.knowbe4.com/) using the Azure AD provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in KnowBe4 Security Awareness Training.
+> * Remove users in KnowBe4 Security Awareness Training when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and KnowBe4 Security Awareness Training.
+> * Provision groups and group memberships in KnowBe4 Security Awareness Training.
+> * [Single sign-on](knowbe4-tutorial.md) to KnowBe4 Security Awareness Training (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application administrator, Cloud Application administrator, Application Owner, or Global administrator).
+* A user account in KnowBe4 Security Awareness Training with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and KnowBe4 Security Awareness Training](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure KnowBe4 Security Awareness Training to support provisioning with Azure AD
+Follow the steps below to configure your SCIM settings in the console.
+>[!NOTE]
+>If you are switching from ADI to SCIM and you use alias email addresses, note that the SCIM integration does not support them. This information will be removed once you disable **Test Mode** and a sync runs.
+
+1. From your KnowBe4 console, click your email address in the top right corner and select **Account Settings**.
+1. Navigate to the **User Management > User Provisioning** section of your settings.
+1. Select **Enable User Provisioning (User Syncing)** to display more provisioning settings.
+
+ ![User Provisioning (User Syncing)](media/knowbe4-security-awareness-training-provisioning-tutorial/user-sync.png)
+
+1. By default, the toggle will be set to **ADI**. Click the **SCIM** toggle to begin setting up.
+1. Expand your SCIM settings by clicking **+ SCIM Settings**.
+
+ ![Tenant Url](media/knowbe4-security-awareness-training-provisioning-tutorial/tenant-url.png)
+
+1. Click **Generate SCIM Token**. This will open a new window with your token ID. Copy this ID and save it to a place that you can easily access later. It is important that you save this token because once you close this window, you cannot view the token again. Once you've saved the information, click **OK** to close the window.
+
+ >[!NOTE]
+ >Once your SCIM token is generated, this button will change to the **Regenerate SCIM Token** button. See the **Troubleshooting Tips** section of this article for more information.
+
+ >[!NOTE]
+ >Your identity provider will need the token (step 5) and the tenant ID (step 6) in order to establish a connection with KnowBe4. Make sure that you save this information so it is readily available when you are ready to set up the connection with your identity provider.
+
+1. Copy the Tenant URL and save it to a place that you can easily access later.
+1. Make sure that the Test Mode option is selected.
+
+ ![Tenant Mode](media/knowbe4-security-awareness-training-provisioning-tutorial/test-mode.png)
+
+ >[!NOTE]
+ >We recommend keeping **Test Mode** enabled until you've configured the connection between KnowBe4 and your identity provider and have run a successful sync. Test Mode is used to generate a report of what will happen when SCIM is enabled. This means no changes are made to your console so you can configure your setup without worrying about changes to your console. When you are ready, you can disable **Test Mode** from your **Account Settings** to enable syncing. If you are switching from ADI to SCIM, **Test Mode** will be enabled automatically after you save your **Account Settings**.
+
+1. Scroll down to the bottom of the **Account Settings** page and click **Save Changes**.
+Now that you have enabled SCIM in your KnowBe4 account, you are ready to finalize the connection with your identity provider. See one of the articles below to find instructions on configuring SCIM for the identity provider that you are using.
+
+## Step 3. Add KnowBe4 Security Awareness Training from the Azure AD application gallery
+
+Add KnowBe4 Security Awareness Training from the Azure AD application gallery to start managing provisioning to KnowBe4 Security Awareness Training. If you have previously set up KnowBe4 Security Awareness Training for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
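If you scope provisioning by assignment, the assignment step itself can also be scripted. The following is a hedged sketch using Microsoft Graph rather than the portal steps above; every ID is a placeholder, and it assumes the signed-in account has permission to manage app role assignments (for example, **AppRoleAssignment.ReadWrite.All**).

```azurecli-interactive
# Hedged sketch: assign a user to the enterprise application through Microsoft Graph.
# All IDs are placeholders for values from your own tenant.
az rest --method POST \
  --url 'https://graph.microsoft.com/v1.0/servicePrincipals/<servicePrincipal-object-id>/appRoleAssignedTo' \
  --headers 'Content-Type=application/json' \
  --body '{"principalId": "<user-object-id>", "resourceId": "<servicePrincipal-object-id>", "appRoleId": "<app-role-id>"}'
# For apps that define no custom roles, the all-zero GUID can be used as the default access role ID.
```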
++
+## Step 5. Configure automatic user provisioning to KnowBe4 Security Awareness Training
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in KnowBe4 Security Awareness Training based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for KnowBe4 Security Awareness Training in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **KnowBe4 Security Awareness Training**.
+
+ ![The KnowBe4 Security Awareness Training link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your KnowBe4 Security Awareness Training Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to KnowBe4 Security Awareness Training. If the connection fails, ensure your KnowBe4 Security Awareness Training account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to KnowBe4 Security Awareness Training**.
+
+1. Review the user attributes that are synchronized from Azure AD to KnowBe4 Security Awareness Training in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in KnowBe4 Security Awareness Training for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the KnowBe4 Security Awareness Training API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by KnowBe4 Security Awareness Training|
+ |||||
+ |userName|String|&check;|&check;
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager.value|Reference||
+ |active|Boolean||
+ |title|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |externalId|String||
+ |displayName|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String||
+ |urn:ietf:params:scim:schemas:extension:knowbe4:kmsat:2.0:User:customDate1|DateTime||
+ |urn:ietf:params:scim:schemas:extension:knowbe4:kmsat:2.0:User:customDate2|DateTime||
+ |urn:ietf:params:scim:schemas:extension:knowbe4:kmsat:2.0:User:customField1|String||
+ |urn:ietf:params:scim:schemas:extension:knowbe4:kmsat:2.0:User:customField2|String||
+ |urn:ietf:params:scim:schemas:extension:knowbe4:kmsat:2.0:User:customField3|String||
+ |urn:ietf:params:scim:schemas:extension:knowbe4:kmsat:2.0:User:customField4|String||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to KnowBe4 Security Awareness Training**.
+
+1. Review the group attributes that are synchronized from Azure AD to KnowBe4 Security Awareness Training in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in KnowBe4 Security Awareness Training for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by KnowBe4 Security Awareness Training|
+ |||||
+ |displayName|String|&check;|&check;
+ |members|Reference||
+ |externalId|String||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for KnowBe4 Security Awareness Training, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to KnowBe4 Security Awareness Training by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## Step 7. Troubleshooting Tips
+* Once SCIM has been enabled, you'll see three buttons in the SCIM section of your Account Settings that can be used for troubleshooting purposes. For more information on these options, see the list below.
+
+ ![Troubleshooting Tips](media/knowbe4-security-awareness-training-provisioning-tutorial/troubleshoot.png)
+
+ * **Regenerate SCIM token**: Use this button to generate a new SCIM token. This token can only be viewed once, so make sure you save this information before closing the window. The link between your identity providers and your KnowBe4 console will be disabled until you provide the new SCIM token.
+
+ * **Revoke SCIM token**: Use this button to disable your current SCIM token. Identity providers currently using this token will no longer be linked to your KnowBe4 console.
+
+ * **Force Sync Now**: Use this button to manually force a SCIM sync at any time, without requiring a change from your identity provider.
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Memo 22 09 Enterprise Wide Identity Management System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-enterprise-wide-identity-management-system.md
For apps using [legacy authentication protocols](../fundamentals/auth-sync-overv
* [Use Azure AD Application Proxy or Secure hybrid partner access](../manage-apps/secure-hybrid-access.md) to provide secure access.
-* Decommission access to apps that are no longer needed, or are not supported (for example, apps added by shadow IT processes). Also have an alarm for
+* Decommission access to apps that are no longer needed, or are not supported (for example, apps added by shadow IT processes).
## Connect devices
aks Monitor Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks.md
Learn more about using Container insights at [Container insights overview](../az
## Configure monitoring The following sections describe the steps required to configure full monitoring of your AKS cluster using Azure Monitor. ### Create Log Analytics workspace
-You require at least one Log Analytics workspace to support Container insights and to collect and analyze other telemetry about your AKS cluster. There is no cost for the workspace, but you do incur ingestion and retention costs when you collect data. See [Manage usage and costs with Azure Monitor Logs](../azure-monitor/logs/manage-cost-storage.md) for details.
+You require at least one Log Analytics workspace to support Container insights and to collect and analyze other telemetry about your AKS cluster. There is no cost for the workspace, but you do incur ingestion and retention costs when you collect data. See [Azure Monitor Logs pricing details](../azure-monitor/logs/cost-logs.md) for details.
If you're just getting started with Azure Monitor, then start with a single workspace and consider creating additional workspaces as your requirements evolve. Many environments will use a single workspace for all the Azure resources they monitor. You can even share a workspace used by [Microsoft Defender for Cloud and Microsoft Sentinel](../azure-monitor/vm/monitor-virtual-machine-security.md), although many customers choose to segregate their availability and performance telemetry from security data.
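If you don't already have a workspace, one way to create it is with the Azure CLI. This is a minimal sketch; the resource group, workspace name, and region are placeholders rather than values from this article.

```azurecli-interactive
# Minimal sketch: create a Log Analytics workspace (all names and the region are placeholders).
az monitor log-analytics workspace create \
  --resource-group <resource-group-name> \
  --workspace-name <workspace-name> \
  --location <region>
```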
The logs for AKS control plane components are implemented in Azure as [resource
You need to create a diagnostic setting to collect resource logs. Create multiple diagnostic settings to send different sets of logs to different locations. See [Create diagnostic settings to send platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md) to create diagnostic settings for your AKS cluster.
-There is a cost for sending resource logs to a workspace, so you should only collect those log categories that you intend to use. Send logs to an Azure storage account to reduce costs if you need to retain the information but don't require it to be readily available for analysis. See [Resource logs](monitor-aks-reference.md#resource-logs) for a description of the categories that are available for AKS and [Manage usage and costs with Azure Monitor Logs](../azure-monitor/logs/manage-cost-storage.md) for details on the cost of ingesting and retaining log data. Start by collecting a minimal number of categories and then modify the diagnostic setting to collect additional categories as your needs increase and as you understand your associated costs.
+There is a cost for sending resource logs to a workspace, so you should only collect those log categories that you intend to use. Send logs to an Azure storage account to reduce costs if you need to retain the information but don't require it to be readily available for analysis. See [Resource logs](monitor-aks-reference.md#resource-logs) for a description of the categories that are available for AKS, and [Azure Monitor Logs pricing details](../azure-monitor/logs/cost-logs.md) for details on the cost of ingesting and retaining log data. Start by collecting a minimal number of categories and then modify the diagnostic setting to collect additional categories as your needs increase and as you understand your associated costs.
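As an illustration, a diagnostic setting that sends a couple of log categories to a workspace can also be created from the Azure CLI. This is only a sketch: the setting name and resource IDs are placeholders, and the categories shown are examples rather than a recommendation.

```azurecli-interactive
# Sketch: send selected AKS resource log categories to a Log Analytics workspace.
# The setting name and resource IDs are placeholders; adjust the categories to the ones you need.
az monitor diagnostic-settings create \
  --name <setting-name> \
  --resource <aks-cluster-resource-id> \
  --workspace <log-analytics-workspace-resource-id> \
  --logs '[{"category": "kube-audit-admin", "enabled": true}, {"category": "guard", "enabled": true}]'
```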
If you're unsure about which resource logs to initially enable, use the recommendations in the following table which are based on the most common customer requirements. Enable the other categories if you later find that you require this information.
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md
You can now update an AKS cluster currently working with service principals to w
```azurecli-interactive az aks update -g <RGName> -n <AKSName> --enable-managed-identity ```
+> [!NOTE]
+> An update will only work if there is an actual VHD update to consume. If you are running the latest VHD, you will need to wait until the next VHD is available to perform the update.
+>
+ > [!NOTE] > After updating, your cluster's control plane and addon pods will switch to use managed identity, but kubelet will KEEP USING SERVICE PRINCIPAL until you upgrade your agentpool. Perform an `az aks nodepool upgrade --node-image-only` on your nodes to complete the update to managed identity. >
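For reference, the node pool upgrade mentioned in the note above might look like the following sketch; the resource group, cluster, and node pool names are placeholders.

```azurecli-interactive
# Sketch: upgrade only the node image so kubelet switches from the service principal to managed identity.
az aks nodepool upgrade \
  --resource-group <RGName> \
  --cluster-name <AKSName> \
  --name <NodePoolName> \
  --node-image-only
```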
app-service Overview Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview-zone-redundancy.md
Title: Zone redundancy in App Service Environment
description: Overview of zone redundancy in an App Service Environment. Previously updated : 11/15/2021 Last updated : 04/06/2022
You can deploy App Service Environment across [availability zones](../../availab
You configure zone redundancy when you create your App Service Environment, and all App Service plans created in that App Service Environment will be zone redundant. You can only specify zone redundancy when you're creating a new App Service Environment. Zone redundancy is only supported in a [subset of regions](./overview.md#regions).
-When a zone goes down, the App Service platform detects lost instances and automatically attempts to find new, replacement instances. If you also have autoscale configured, and if it determines that more instances are needed, autoscale also issues a request to App Service to add more instances. Autoscale behavior is independent of App Service platform behavior.
+When a zone goes down, the App Service platform detects lost instances and automatically attempts to find new, replacement instances. If you also have auto-scale configured, and if it determines that more instances are needed, auto-scale also issues a request to App Service to add more instances. Auto-scale behavior is independent of App Service platform behavior.
There's no guarantee that requests for instances in a zone-down scenario will succeed, because back-filling lost instances occurs on a best effort basis. It's a good idea to scale your App Service plans to account for losing a zone.
-Applications deployed in a zone redundant App Service Environment continue to run and serve traffic, even if other zones in the same region suffer an outage. It's possible, however, that non-runtime behaviors might still be affected by an outage in other availability zones. These behaviors might include the following: App Service plan scaling, application creation, application configuration, and application publishing. Zone redundancy for App Service Environment only ensures continued uptime for deployed applications.
+Applications deployed in a zone redundant App Service Environment continue to run and serve traffic, even if other zones in the same region suffer an outage. It's possible, however, that non-runtime behaviors might still be affected by an outage in other availability zones. These behaviors might include App Service plan scaling, application creation, application configuration, and application publishing. Zone redundancy for App Service Environment only ensures continued uptime for deployed applications.
-When the App Service platform allocates instances to a zone redundant App Service plan, it uses [best effort zone balancing offered by the underlying Azure virtual machine scale sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-balancing). An App Service plan is considered balanced if each zone has either the same number of instances, or +/- 1 instance in all of the other zones used by the App Service plan.
+When the App Service platform allocates instances to a zone redundant App Service plan, it uses [best effort zone balancing offered by the underlying Azure virtual machine scale sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-balancing). An App Service plan is considered balanced if each zone has either the same number of instances, or +/- one instance in all of the other zones used by the App Service plan.
+
+## In-region data residency
+
+A zone redundant App Service Environment will only store customer data within the region where it has been deployed. App content, settings, and secrets stored in App Service remain within the region where the zone redundant App Service Environment is deployed.
## Pricing
- There is a minimum charge of nine App Service plan instances in a zone redundant App Service Environment. There is no added charge for availability zone support if you have nine or more instances. If you have fewer than nine instances (of any size) across App Service plans in the zone redundant App Service Environment, you're charged for the difference between nine and the running instance count. This charge is for additional Windows I1v2 instances.
+ There's a minimum charge of nine App Service plan instances in a zone redundant App Service Environment. There's no added charge for availability zone support if you have nine or more instances. If you have fewer than nine instances (of any size) across App Service plans in the zone redundant App Service Environment, you're charged for the difference between nine and the running instance count. This difference is billed as Windows I1v2 instances.
## Next steps
automation Automation Manage Send Joblogs Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-manage-send-joblogs-log-analytics.md
AzureDiagnostics
* To understand creation and retrieval of output and error messages from runbooks, see [Monitor runbook output](automation-runbook-output-and-messages.md). * To learn more about runbook execution, how to monitor runbook jobs, and other technical details, see [Runbook execution in Azure Automation](automation-runbook-execution.md). * To learn more about Azure Monitor logs and data collection sources, see [Collecting Azure storage data in Azure Monitor logs overview](../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
-* For help troubleshooting Log Analytics, see [Troubleshooting why Log Analytics is no longer collecting data](../azure-monitor/logs/manage-cost-storage.md#troubleshooting-why-log-analytics-is-no-longer-collecting-data).
+* For help troubleshooting Log Analytics, see [Troubleshooting why Log Analytics is no longer collecting data](../azure-monitor/logs/data-collection-troubleshoot.md).
automation Configure Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/configure-alerts.md
Once you have your alerts configured, you can set up an action group, which is a
* Learn about [log queries](../../azure-monitor/logs/log-query-overview.md) to retrieve and analyze data from a Log Analytics workspace.
-* Manage [usage and costs with Azure Monitor Logs](../../azure-monitor/logs/manage-cost-storage.md) describes how to control your costs by changing your data retention period, and how to analyze and alert on your data usage.
+* [Analyze usage in Log Analytics workspace](../../azure-monitor/logs/analyze-usage.md) describes how to analyze and alert on your data usage.
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview.md
Change Tracking and Inventory makes use of [Microsoft Defender for Cloud File In
Enabling all features included in Change Tracking and Inventory might cause additional charges. Before proceeding, review [Automation Pricing](https://azure.microsoft.com/pricing/details/automation/) and [Azure Monitor Pricing](https://azure.microsoft.com/pricing/details/monitor/).
-Change Tracking and Inventory forwards data to Azure Monitor Logs, and this collected data is stored in a Log Analytics workspace. The File Integrity Monitoring (FIM) feature is available only when **Microsoft Defender for servers** is enabled. See Microsoft Defender for Cloud [Pricing](../../security-center/security-center-pricing.md) to learn more. FIM uploads data to the same Log Analytics workspace as the one created to store data from Change Tracking and Inventory. We recommend that you monitor your linked Log Analytics workspace to keep track of your exact usage. For more information about analyzing Azure Monitor Logs data usage, see [Manage usage and cost](../../azure-monitor/logs/manage-cost-storage.md).
+Change Tracking and Inventory forwards data to Azure Monitor Logs, and this collected data is stored in a Log Analytics workspace. The File Integrity Monitoring (FIM) feature is available only when **Microsoft Defender for servers** is enabled. See Microsoft Defender for Cloud [Pricing](../../security-center/security-center-pricing.md) to learn more. FIM uploads data to the same Log Analytics workspace as the one created to store data from Change Tracking and Inventory. We recommend that you monitor your linked Log Analytics workspace to keep track of your exact usage. For more information about analyzing Azure Monitor Logs data usage, see [Analyze usage in Log Analytics workspace](../../azure-monitor/logs/analyze-usage.md).
Machines connected to the Log Analytics workspace use the [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) to collect data about changes to installed software, Windows services, Windows registry and files, and Linux daemons on monitored servers. When data is available, the agent sends it to Azure Monitor Logs for processing. Azure Monitor Logs applies logic to the received data, records it, and makes it available for analysis.
The following table shows the tracked item limits per machine for Change Trackin
|Services|250| |Daemons|250|
-The average Log Analytics data usage for a machine using Change Tracking and Inventory is approximately 40 MB per month, depending on your environment. With the Usage and Estimated Costs feature of the Log Analytics workspace, you can view the data ingested by Change Tracking and Inventory in a usage chart. Use this data view to evaluate your data usage and determine how it affects your bill. See [Understand your usage and estimate costs](../../azure-monitor/logs/manage-cost-storage.md#understand-your-usage-and-estimate-costs).
+The average Log Analytics data usage for a machine using Change Tracking and Inventory is approximately 40 MB per month, depending on your environment. With the Usage and Estimated Costs feature of the Log Analytics workspace, you can view the data ingested by Change Tracking and Inventory in a usage chart. Use this data view to evaluate your data usage and determine how it affects your bill. See [Understand your usage and estimate costs](../../azure-monitor/logs/usage-estimated-costs.md#understand-your-usage-and-optimize-your-pricing-tier).
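If you prefer to check this usage from the command line instead of the portal chart, the following hedged sketch sums billable ingestion for the Change Tracking tables over the last 30 days. The workspace GUID is a placeholder, and the query assumes the standard `Usage` table schema.

```azurecli-interactive
# Hedged sketch: approximate Change Tracking and Inventory ingestion over the last 30 days.
az monitor log-analytics query \
  --workspace <workspace-customer-id-guid> \
  --analytics-query "Usage | where TimeGenerated > ago(30d) | where IsBillable == true | where DataType in ('ConfigurationData', 'ConfigurationChange') | summarize IngestedMB = sum(Quantity) by DataType"
```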
### Windows services data
automation Configure Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/configure-alerts.md
Once you have your alerts configured, you can set up an action group, which is a
* Learn about [log queries](../../azure-monitor/logs/log-query-overview.md) to retrieve and analyze data from a Log Analytics workspace.
-* Manage [usage and costs with Azure Monitor Logs](../../azure-monitor/logs/manage-cost-storage.md) describes how to control your costs by changing your data retention period, and how to analyze and alert on your data usage.
+* [Azure Monitor best practices - Cost management](../../azure-monitor/best-practices-cost.md) describes how to control your costs by changing your data retention period, and how to analyze and alert on your data usage.
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md
Update Management scans managed machines for data using the following rules. It
* Each Linux machine - Update Management does a scan every hour.
-The average data usage by Azure Monitor logs for a machine using Update Management is approximately 25 MB per month. This value is only an approximation and is subject to change, depending on your environment. We recommend that you monitor your environment to keep track of your exact usage. For more information about analyzing Azure Monitor Logs data usage, see [Manage usage and cost](../../azure-monitor/logs/manage-cost-storage.md).
+The average data usage by Azure Monitor logs for a machine using Update Management is approximately 25 MB per month. This value is only an approximation and is subject to change, depending on your environment. We recommend that you monitor your environment to keep track of your exact usage. For more information about analyzing Azure Monitor Logs data usage, see [Azure Monitor Logs pricing details](../../azure-monitor/logs/cost-logs.md).
## Update classifications
azure-arc Create Data Controller Indirect Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-indirect-azure-data-studio.md
You can create a data controller using Azure Data Studio through the deployment
- You need access to a Kubernetes cluster and have your kubeconfig file configured to point to the Kubernetes cluster you want to deploy to. - You need to [install the client tools](install-client-tools.md) including **Azure Data Studio**, the Azure Data Studio extensions called **Azure Arc** and Azure CLI with the `arcdata` extension. - You need to log in to Azure in Azure Data Studio. To do this: type CTRL/Command + SHIFT + P to open the command text window and type **Azure**. Choose **Azure: Sign in**. In the panel, that comes up click the + icon in the top right to add an Azure account.
+- You need to run `az login` in your local Command Prompt to sign in to the Azure CLI, as shown in the sketch below.
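A minimal sketch of those CLI prerequisites (assuming the Azure CLI is already installed locally):
```console
az login                            # sign in to Azure
az extension add --name arcdata     # add the arcdata extension if it isn't installed yet
az extension list --output table    # verify the extension is listed
```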
## Use the Deployment Wizard to create Azure Arc data controller
azure-arc Plan Azure Arc Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/plan-azure-arc-data-services.md
You can deploy Azure Arc-enabled data services on various types of Kubernetes cl
- OpenShift Container Platform (OCP) > [!IMPORTANT]
-> * The minimum supported version of Kubernetes is v1.19. For more information, see the "Known issues" section of [Release notes&nbsp;- Azure Arc-enabled data services](./release-notes.md#known-issues).
-> * The minimum supported version of OCP is 4.7.
+> * The minimum supported version of Kubernetes is v1.21. For more information, see the "Known issues" section of [Release notes&nbsp;- Azure Arc-enabled data services](./release-notes.md#known-issues).
+> * The minimum supported version of OCP is 4.8.
> * If you're using Azure Kubernetes Service, your cluster's worker node virtual machine (VM) size should be at least Standard_D8s_v3 and use Premium Disks. > * The cluster should not span multiple availability zones. > * For more information, see the "Known issues" section of [Release notes&nbsp;- Azure Arc-enabled data services](./release-notes.md#known-issues).
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
The GitOps agents require TCP on port 443 (`https://:443`) to function. The agen
## Enable CLI extensions >[!NOTE]
->The `k8s-configuration` CLI extension has been upgraded to manage either Flux v2 or Flux v1 configurations. Flux v2 is an important upgrade to Flux v1, and eventually GitOps support for Flux v1 will cease. Begin using Flux v2 as soon as possible.
+>The `k8s-configuration` CLI extension has been upgraded to manage either Flux v2 or Flux v1 configurations. Flux v2 is an important upgrade to Flux v1, and eventually Azure will stop supporting GitOps with Flux v1. Begin using Flux v2 as soon as possible.
Install the latest `k8s-configuration` and `k8s-extension` CLI extension packages:
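As a hedged sketch of that step (these are the standard `az extension` commands; use `update` instead of `add` if the extensions are already present):
```console
az extension add --name k8s-configuration
az extension add --name k8s-extension

# If they're already installed, update to the latest versions instead:
az extension update --name k8s-configuration
az extension update --name k8s-extension
```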
az extension list -o table
Experimental ExtensionType Name Path Preview Version - -- -- -- -- --
-False whl connectedk8s C:\Users\somename\.azure\cliextensions\connectedk8s False 1.2.0
-False whl k8s-configuration C:\Users\somename\.azure\cliextensions\k8s-configuration False 1.4.1
-False whl k8s-extension C:\Users\somename\.azure\cliextensions\k8s-extension False 1.0.4
+False whl connectedk8s C:\Users\somename\.azure\cliextensions\connectedk8s False 1.2.7
+False whl k8s-configuration C:\Users\somename\.azure\cliextensions\k8s-configuration False 1.5.0
+False whl k8s-extension C:\Users\somename\.azure\cliextensions\k8s-extension False 1.1.0
``` > [!TIP]
False whl k8s-extension C:\Users\somename\.azure\c
## Apply a Flux configuration by using the Azure CLI
-Use the `k8s-configuration` Azure CLI extension (or the Azure portal) to enable GitOps in an AKS or Arc-enabled Kubernetes cluster. For a demonstration, use the public [flux2-kustomize-helm-example](https://github.com/fluxcd/flux2-kustomize-helm-example) repository.
+Use the `k8s-configuration` Azure CLI extension (or the Azure portal) to enable GitOps in an AKS or Arc-enabled Kubernetes cluster. For a demonstration, use the public [gitops-flux2-kustomize-helm-mt](https://github.com/Azure/gitops-flux2-kustomize-helm-mt) repository.
In the following example: * The resource group that contains the cluster is `flux-demo-rg`. * The name of the Azure Arc cluster is `flux-demo-arc`. * The cluster type is Azure Arc (`-t connectedClusters`), but this example also works with AKS (`-t managedClusters`).
-* The name of the Flux configuration is `gitops-demo`.
-* The namespace for configuration installation is `gitops-demo`.
-* The URL for the public Git repository is `https://github.com/fluxcd/flux2-kustomize-helm-example`.
+* The name of the Flux configuration is `cluster-config`.
+* The namespace for configuration installation is `cluster-config`.
+* The URL for the public Git repository is `https://github.com/Azure/gitops-flux2-kustomize-helm-mt`.
* The Git repository branch is `main`. * The scope of the configuration is `cluster`. It gives the operators permissions to make changes throughout the cluster. * Two kustomizations are specified with names `infra` and `apps`. Each is associated with a path in the repository. * The `apps` kustomization depends on the `infra` kustomization. (The `infra` kustomization must finish before the `apps` kustomization runs.) * Set `prune=true` on both kustomizations. This setting ensures that the objects that Flux deployed to the cluster will be cleaned up if they're removed from the repository or if the Flux configuration or kustomizations are deleted.
-If the `microsoft.flux` extension isn't already installed in the cluster, it will be installed.
+If the `microsoft.flux` extension isn't already installed in the cluster, it'll be installed. When the Flux configuration is installed, the initial compliance state may be "Pending" or "Non-compliant" because reconciliation is still ongoing. After a minute, you can query the configuration again and see the final compliance state.
```console
-az k8s-configuration flux create -g flux-demo-rg -c flux-demo-arc -n gitops-demo --namespace gitops-demo -t connectedClusters --scope cluster -u https://github.com/fluxcd/flux2-kustomize-helm-example --branch main --kustomization name=infra path=./infrastructure prune=true --kustomization name=apps path=./apps/staging prune=true dependsOn=["infra"]
+az k8s-configuration flux create -g flux-demo-rg -c flux-demo-arc -n cluster-config --namespace cluster-config -t connectedClusters --scope cluster -u https://github.com/Azure/gitops-flux2-kustomize-helm-mt --branch main --kustomization name=infra path=./infrastructure prune=true --kustomization name=apps path=./apps/staging prune=true dependsOn=["infra"]
-Command group 'k8s-configuration flux' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
-Warning! https url is being used without https auth params, ensure the repository url provided is not a private repo
'Microsoft.Flux' extension not found on the cluster, installing it now. This may take a few minutes... 'Microsoft.Flux' extension was successfully installed on the cluster
-Creating the flux configuration 'gitops-demo' in the cluster. This may take a few minutes...
+Creating the flux configuration 'cluster-config' in the cluster. This may take a few minutes...
{ "complianceState": "Pending", ... (not shown because of pending status) } ```
-Show the configuration after time to finish reconciliations.
+Show the configuration after allowing time to finish reconciliations.
```console
-az k8s-configuration flux show -g flux-demo-rg -c flux-demo-arc -n gitops-demo -t connectedClusters
-
-Command group 'k8s-configuration flux' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
+az k8s-configuration flux show -g flux-demo-rg -c flux-demo-arc -n cluster-config -t connectedClusters
{
+ "bucket": null,
"complianceState": "Compliant", "configurationProtectedSettings": {}, "errorMessage": "", "gitRepository": {
- "httpsCaFile": null,
+ "httpsCaCert": null,
"httpsUser": null, "localAuthRef": null, "repositoryRef": {
Command group 'k8s-configuration flux' is in preview and under development. Refe
"sshKnownHosts": null, "syncIntervalInSeconds": 600, "timeoutInSeconds": 600,
- "url": "https://github.com/fluxcd/flux2-kustomize-helm-example"
+ "url": "https://github.com/Azure/gitops-flux2-kustomize-helm-mt"
},
- "id": "/subscriptions/REDACTED/resourceGroups/flux-demo-rg/providers/Microsoft.Kubernetes/connectedClusters/flux-demo-arc/providers/Microsoft.KubernetesConfiguration/fluxConfigurations/gitops-demo",
+ "id": "/subscriptions/REDACTED/resourceGroups/flux-demo-rg/providers/Microsoft.Kubernetes/connectedClusters/flux-demo-arc/providers/Microsoft.KubernetesConfiguration/fluxConfigurations/cluster-config",
"kustomizations": { "apps": { "dependsOn": [
- {
- "kustomizationName": "infra"
- }
+ "infra"
], "force": false,
+ "name": "apps",
"path": "./apps/staging", "prune": true, "retryIntervalInSeconds": null,
Command group 'k8s-configuration flux' is in preview and under development. Refe
"timeoutInSeconds": 600 }, "infra": {
- "dependsOn": [],
+ "dependsOn": null,
"force": false,
+ "name": "infra",
"path": "./infrastructure", "prune": true, "retryIntervalInSeconds": null,
Command group 'k8s-configuration flux' is in preview and under development. Refe
"timeoutInSeconds": 600 } },
- "lastSourceSyncedAt": "2021-11-23T22:59:22+00:00",
- "lastSourceSyncedCommitId": "main/f0c2aaef48461d8099a8fff05893e9ebb96f1561",
- "name": "gitops-demo",
- "namespace": "gitops-demo",
+ "name": "cluster-config",
+ "namespace": "cluster-config",
"provisioningState": "Succeeded", "repositoryPublicKey": "",
- "resourceGroup": "flux-demo-rg",
+ "resourceGroup": "Flux2-Test-RG-EUS",
"scope": "cluster", "sourceKind": "GitRepository",
+ "sourceSyncedCommitId": "main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf",
+ "sourceUpdatedAt": "2022-04-06T17:34:03+00:00",
+ "statusUpdatedAt": "2022-04-06T17:44:56.417000+00:00",
"statuses": [ { "appliedBy": null, "complianceState": "Compliant", "helmReleaseProperties": null, "kind": "GitRepository",
- "name": "gitops-demo",
- "namespace": "gitops-demo",
+ "name": "cluster-config",
+ "namespace": "cluster-config",
"statusConditions": [ {
- "lastTransitionTime": "2021-11-23T22:59:22+00:00",
- "message": "Fetched revision: main/f0c2aaef48461d8099a8fff05893e9ebb96f1561",
+ "lastTransitionTime": "2022-04-06T17:33:32+00:00",
+ "message": "Fetched revision: main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf",
"reason": "GitOperationSucceed", "status": "True", "type": "Ready"
Command group 'k8s-configuration flux' is in preview and under development. Refe
"complianceState": "Compliant", "helmReleaseProperties": null, "kind": "Kustomization",
- "name": "gitops-demo-apps",
- "namespace": "gitops-demo",
+ "name": "cluster-config-apps",
+ "namespace": "cluster-config",
"statusConditions": [ {
- "lastTransitionTime": "2021-11-23T22:59:53+00:00",
- "message": "Applied revision: main/f0c2aaef48461d8099a8fff05893e9ebb96f1561",
+ "lastTransitionTime": "2022-04-06T17:44:04+00:00",
+ "message": "Applied revision: main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf",
"reason": "ReconciliationSucceeded", "status": "True", "type": "Ready"
Command group 'k8s-configuration flux' is in preview and under development. Refe
}, { "appliedBy": {
- "name": "gitops-demo-apps",
- "namespace": "gitops-demo"
+ "name": "cluster-config-apps",
+ "namespace": "cluster-config"
}, "complianceState": "Compliant", "helmReleaseProperties": { "failureCount": 0, "helmChartRef": {
- "name": "podinfo-podinfo",
- "namespace": "flux-system"
+ "name": "cluster-config-podinfo",
+ "namespace": "cluster-config"
}, "installFailureCount": 0, "lastRevisionApplied": 1,
Command group 'k8s-configuration flux' is in preview and under development. Refe
}, "kind": "HelmRelease", "name": "podinfo",
- "namespace": "podinfo",
+ "namespace": "cluster-config",
"statusConditions": [ {
- "lastTransitionTime": "2021-11-23T22:59:54+00:00",
+ "lastTransitionTime": "2022-04-06T17:33:43+00:00",
"message": "Release reconciliation succeeded", "reason": "ReconciliationSucceeded", "status": "True", "type": "Ready" }, {
- "lastTransitionTime": "2021-11-23T22:59:54+00:00",
+ "lastTransitionTime": "2022-04-06T17:33:43+00:00",
"message": "Helm install succeeded", "reason": "InstallSucceeded", "status": "True",
Command group 'k8s-configuration flux' is in preview and under development. Refe
"complianceState": "Compliant", "helmReleaseProperties": null, "kind": "Kustomization",
- "name": "gitops-demo-infra",
- "namespace": "gitops-demo",
+ "name": "cluster-config-infra",
+ "namespace": "cluster-config",
"statusConditions": [ {
- "lastTransitionTime": "2021-11-23T22:59:24+00:00",
- "message": "Applied revision: main/f0c2aaef48461d8099a8fff05893e9ebb96f1561",
+ "lastTransitionTime": "2022-04-06T17:43:33+00:00",
+ "message": "Applied revision: main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf",
"reason": "ReconciliationSucceeded", "status": "True", "type": "Ready"
Command group 'k8s-configuration flux' is in preview and under development. Refe
}, { "appliedBy": {
- "name": "gitops-demo-infra",
- "namespace": "gitops-demo"
+ "name": "cluster-config-infra",
+ "namespace": "cluster-config"
}, "complianceState": "Compliant", "helmReleaseProperties": null, "kind": "HelmRepository", "name": "bitnami",
- "namespace": "flux-system",
+ "namespace": "cluster-config",
"statusConditions": [ {
- "lastTransitionTime": "2021-11-23T22:59:30+00:00",
- "message": "Fetched revision: 75dd8746b22e569460eb3b453b0ae22941c680b7",
+ "lastTransitionTime": "2022-04-06T17:33:36+00:00",
+ "message": "Fetched revision: 46a41610ea410558eb485bcb673fd01c4d1f47b86ad292160b256555b01cce81",
"reason": "IndexationSucceed", "status": "True", "type": "Ready"
Command group 'k8s-configuration flux' is in preview and under development. Refe
}, { "appliedBy": {
- "name": "gitops-demo-infra",
- "namespace": "gitops-demo"
+ "name": "cluster-config-infra",
+ "namespace": "cluster-config"
}, "complianceState": "Compliant", "helmReleaseProperties": null, "kind": "HelmRepository", "name": "podinfo",
- "namespace": "flux-system",
+ "namespace": "cluster-config",
"statusConditions": [ {
- "lastTransitionTime": "2021-11-23T22:59:24+00:00",
- "message": "Fetched revision: fddc2924c28a1a1895e215a4dc065f33a0ea2e8e",
+ "lastTransitionTime": "2022-04-06T17:33:33+00:00",
+ "message": "Fetched revision: 421665ba04fab9b275b9830947417b2cebf67764eee46d568c94cf2a95a6341d",
"reason": "IndexationSucceed", "status": "True", "type": "Ready"
Command group 'k8s-configuration flux' is in preview and under development. Refe
}, { "appliedBy": {
- "name": "gitops-demo-infra",
- "namespace": "gitops-demo"
+ "name": "cluster-config-infra",
+ "namespace": "cluster-config"
}, "complianceState": "Compliant", "helmReleaseProperties": { "failureCount": 0, "helmChartRef": {
- "name": "nginx-nginx",
- "namespace": "flux-system"
+ "name": "cluster-config-nginx",
+ "namespace": "cluster-config"
}, "installFailureCount": 0, "lastRevisionApplied": 1,
Command group 'k8s-configuration flux' is in preview and under development. Refe
}, "kind": "HelmRelease", "name": "nginx",
- "namespace": "nginx",
+ "namespace": "cluster-config",
"statusConditions": [ {
- "lastTransitionTime": "2021-11-23T23:00:10+00:00",
+ "lastTransitionTime": "2022-04-06T17:34:13+00:00",
"message": "Release reconciliation succeeded", "reason": "ReconciliationSucceeded", "status": "True", "type": "Ready" }, {
- "lastTransitionTime": "2021-11-23T23:00:10+00:00",
+ "lastTransitionTime": "2022-04-06T17:34:13+00:00",
"message": "Helm install succeeded", "reason": "InstallSucceeded", "status": "True",
Command group 'k8s-configuration flux' is in preview and under development. Refe
}, { "appliedBy": {
- "name": "gitops-demo-infra",
- "namespace": "gitops-demo"
+ "name": "cluster-config-infra",
+ "namespace": "cluster-config"
}, "complianceState": "Compliant", "helmReleaseProperties": { "failureCount": 0, "helmChartRef": {
- "name": "redis-redis",
- "namespace": "flux-system"
+ "name": "cluster-config-redis",
+ "namespace": "cluster-config"
}, "installFailureCount": 0, "lastRevisionApplied": 1,
Command group 'k8s-configuration flux' is in preview and under development. Refe
}, "kind": "HelmRelease", "name": "redis",
- "namespace": "redis",
+ "namespace": "cluster-config",
"statusConditions": [ {
- "lastTransitionTime": "2021-11-23T22:59:56+00:00",
+ "lastTransitionTime": "2022-04-06T17:33:57+00:00",
"message": "Release reconciliation succeeded", "reason": "ReconciliationSucceeded", "status": "True", "type": "Ready" }, {
- "lastTransitionTime": "2021-11-23T22:59:56+00:00",
+ "lastTransitionTime": "2022-04-06T17:33:57+00:00",
"message": "Helm install succeeded", "reason": "InstallSucceeded", "status": "True", "type": "Released" } ]
+ },
+ {
+ "appliedBy": {
+ "name": "cluster-config-infra",
+ "namespace": "cluster-config"
+ },
+ "complianceState": "Compliant",
+ "helmReleaseProperties": null,
+ "kind": "HelmChart",
+ "name": "test-chart",
+ "namespace": "cluster-config",
+ "statusConditions": [
+ {
+ "lastTransitionTime": "2022-04-06T17:33:40+00:00",
+ "message": "Pulled 'redis' chart with version '11.3.4'.",
+ "reason": "ChartPullSucceeded",
+ "status": "True",
+ "type": "Ready"
+ }
+ ]
} ], "suspend": false, "systemData": {
- "createdAt": "2021-11-23T22:58:53.736245+00:00",
+ "createdAt": "2022-04-06T17:32:44.646629+00:00",
"createdBy": null, "createdByType": null,
- "lastModifiedAt": "2021-11-23T22:58:53.736245+00:00",
+ "lastModifiedAt": "2022-04-06T17:32:44.646629+00:00",
"lastModifiedBy": null, "lastModifiedByType": null },
Command group 'k8s-configuration flux' is in preview and under development. Refe
These namespaces were created: * `flux-system`: Holds the Flux extension controllers.
-* `gitops-demo`: Holds the Flux configuration objects.
+* `cluster-config`: Holds the Flux configuration objects.
* `nginx`, `podinfo`, `redis`: Namespaces for workloads described in manifests in the Git repository. ```console kubectl get namespaces
-NAME STATUS AGE
-azure-arc Active 17d
-default Active 17d
-flux-system Active 18m
-gitops-demo Active 17m
-kube-node-lease Active 17d
-kube-public Active 17d
-kube-system Active 17d
-nginx Active 17m
-podinfo Active 16m
-redis Active 17m
``` The `flux-system` namespace contains the Flux extension objects:
notification-controller-7d45678bc-fvlvr 1/1 Running 0 21m
source-controller-df7dc97cd-4drh2 1/1 Running 0 21m ```
-The namespace `gitops-demo` has the Flux configuration objects.
+The namespace `cluster-config` has the Flux configuration objects.
```console kubectl get crds NAME CREATED AT
-alerts.notification.toolkit.fluxcd.io 2021-11-23T22:57:49Z
-arccertificates.clusterconfig.azure.com 2021-11-06T15:12:36Z
-azureclusteridentityrequests.clusterconfig.azure.com 2021-11-06T15:12:36Z
-connectedclusters.arc.azure.com 2021-11-06T15:12:36Z
-customlocationsettings.clusterconfig.azure.com 2021-11-06T15:12:36Z
-extensionconfigs.clusterconfig.azure.com 2021-11-06T15:12:36Z
-fluxconfigs.clusterconfig.azure.com 2021-11-23T22:57:49Z
-gitconfigs.clusterconfig.azure.com 2021-11-06T15:12:36Z
-gitrepositories.source.toolkit.fluxcd.io 2021-11-23T22:57:49Z
-healthstates.azmon.container.insights 2021-11-06T14:45:55Z
-helmcharts.source.toolkit.fluxcd.io 2021-11-23T22:57:49Z
-helmreleases.helm.toolkit.fluxcd.io 2021-11-23T22:57:49Z
-helmrepositories.source.toolkit.fluxcd.io 2021-11-23T22:57:49Z
-kustomizations.kustomize.toolkit.fluxcd.io 2021-11-23T22:57:49Z
-providers.notification.toolkit.fluxcd.io 2021-11-23T22:57:49Z
-receivers.notification.toolkit.fluxcd.io 2021-11-23T22:57:49Z
+alerts.notification.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+arccertificates.clusterconfig.azure.com 2022-03-28T21:45:19Z
+azureclusteridentityrequests.clusterconfig.azure.com 2022-03-28T21:45:19Z
+azureextensionidentities.clusterconfig.azure.com 2022-03-28T21:45:19Z
+buckets.source.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+connectedclusters.arc.azure.com 2022-03-28T21:45:19Z
+customlocationsettings.clusterconfig.azure.com 2022-03-28T21:45:19Z
+extensionconfigs.clusterconfig.azure.com 2022-03-28T21:45:19Z
+fluxconfigs.clusterconfig.azure.com 2022-04-06T17:15:48Z
+gitconfigs.clusterconfig.azure.com 2022-03-28T21:45:19Z
+gitrepositories.source.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+helmcharts.source.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+helmreleases.helm.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+helmrepositories.source.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+imagepolicies.image.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+imagerepositories.image.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+imageupdateautomations.image.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+kustomizations.kustomize.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+providers.notification.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+receivers.notification.toolkit.fluxcd.io 2022-04-06T17:15:48Z
+volumesnapshotclasses.snapshot.storage.k8s.io 2022-03-28T21:06:12Z
+volumesnapshotcontents.snapshot.storage.k8s.io 2022-03-28T21:06:12Z
+volumesnapshots.snapshot.storage.k8s.io 2022-03-28T21:06:12Z
+websites.extensions.example.com 2022-03-30T23:42:32Z
``` ```console kubectl get fluxconfigs -A
-NAMESPACE NAME SCOPE URL PROVISION AGE
-gitops-demo gitops-demo cluster https://github.com/fluxcd/flux2-kustomize-helm-example Succeeded 22m
+NAMESPACE NAME SCOPE URL PROVISION AGE
+cluster-config cluster-config cluster https://github.com/Azure/gitops-flux2-kustomize-helm-mt Succeeded 44m
``` ```console kubectl get gitrepositories -A
-NAMESPACE NAME URL READY STATUS AGE
-gitops-demo gitops-demo https://github.com/fluxcd/flux2-kustomize-helm-example True Fetched revision: main/f0c2aaef48461d8099a8fff05893e9ebb96f1561 22m
+NAMESPACE NAME URL READY STATUS AGE
+cluster-config cluster-config https://github.com/Azure/gitops-flux2-kustomize-helm-mt True Fetched revision: main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf 45m
``` ```console kubectl get helmreleases -A
-NAMESPACE NAME READY STATUS AGE
-nginx nginx True Release reconciliation succeeded 6d4h
-podinfo podinfo True Release reconciliation succeeded 6d4h
-redis redis True Release reconciliation succeeded 6d4h
+NAMESPACE NAME READY STATUS AGE
+cluster-config nginx True Release reconciliation succeeded 66m
+cluster-config podinfo True Release reconciliation succeeded 66m
+cluster-config redis True Release reconciliation succeeded 66m
``` ```console kubectl get kustomizations -A
-NAMESPACE NAME READY STATUS AGE
-gitops-demo gitops-demo-apps True Applied revision: main/f0c2aaef48461d8099a8fff05893e9ebb96f1561 23m
-gitops-demo gitops-demo-infra True Applied revision: main/f0c2aaef48461d8099a8fff05893e9ebb96f1561 23m
+
+NAMESPACE NAME READY STATUS AGE
+cluster-config cluster-config-apps True Applied revision: main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf 65m
+cluster-config cluster-config-infra True Applied revision: main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf 65m
``` Workloads are deployed from manifests in the Git repository.
Workloads are deployed from manifests in the Git repository.
kubectl get deploy -n nginx NAME READY UP-TO-DATE AVAILABLE AGE
-nginx-ingress-controller 1/1 1 1 25m
-nginx-ingress-controller-default-backend 1/1 1 1 25m
+nginx-ingress-controller 1/1 1 1 67m
+nginx-ingress-controller-default-backend 1/1 1 1 67m
kubectl get deploy -n podinfo NAME READY UP-TO-DATE AVAILABLE AGE
-podinfo 1/1 1 1 26m
+podinfo 1/1 1 1 68m
kubectl get all -n redis NAME READY STATUS RESTARTS AGE
-pod/redis-master-0 1/1 Running 0 95m
+pod/redis-master-0 1/1 Running 0 68m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-service/redis-headless ClusterIP None <none> 6379/TCP 95m
-service/redis-master ClusterIP 10.0.180.63 <none> 6379/TCP 95m
+service/redis-headless ClusterIP None <none> 6379/TCP 68m
+service/redis-master ClusterIP 10.0.13.182 <none> 6379/TCP 68m
NAME READY AGE
-statefulset.apps/redis-master 1/1 95m
+statefulset.apps/redis-master 1/1 68m
``` ### Delete the Flux configuration
statefulset.apps/redis-master 1/1 95m
You can delete the Flux configuration by using the following command. This action deletes both the `fluxConfigurations` resource in Azure and the Flux configuration objects in the cluster. Because the Flux configuration was originally created with the `prune=true` parameter for the kustomization, all of the objects created in the cluster based on manifests in the Git repository will be removed when the Flux configuration is removed. ```console
-az k8s-configuration flux delete -g flux-demo-rg -c flux-demo-arc -n gitops-demo -t connectedClusters --yes
+az k8s-configuration flux delete -g flux-demo-rg -c flux-demo-arc -n cluster-config -t connectedClusters --yes
```
+For an AKS cluster, use the same command but with `-t managedClusters` replacing `-t connectedClusters`.
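For example, assuming an AKS cluster named `flux-demo-aks` in the same resource group (the cluster name here is illustrative):
```console
az k8s-configuration flux delete -g flux-demo-rg -c flux-demo-aks -n cluster-config -t managedClusters --yes
```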
+ Note that this action does *not* remove the Flux extension. ### Delete the Flux cluster extension
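To remove the extension itself, here's a hedged sketch using the `k8s-extension` command group (assuming the extension instance uses the default name `flux`):
```console
az k8s-extension delete -g flux-demo-rg -c flux-demo-arc -n flux -t connectedClusters --yes
```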
az k8s-configuration flux -h
Group az k8s-configuration flux : Commands to manage Flux v2 Kubernetes configurations.
- This command group is in preview and under development. Reference and support levels:
- https://aka.ms/CLI_refstatus
+ Subgroups: deployed-object : Commands to see deployed objects associated with Flux v2 Kubernetes configurations.
Subgroups:
configurations. Commands:
- create : Create a Flux v2 Kubernetes configuration.
- delete : Delete a Flux v2 Kubernetes configuration.
- list : List all Flux v2 Kubernetes configurations.
- show : Show a Flux v2 Kubernetes configuration.
- update : Update a Flux v2 Kubernetes configuration.
+ create : Create a Flux v2 Kubernetes configuration.
+ delete : Delete a Flux v2 Kubernetes configuration.
+ list : List all Flux v2 Kubernetes configurations.
+ show : Show a Flux v2 Kubernetes configuration.
+ update : Update a Flux v2 Kubernetes configuration.
``` Here are the parameters for the `k8s-configuration flux create` CLI command:
This command is from the following extension: k8s-configuration
Command az k8s-configuration flux create : Create a Flux v2 Kubernetes configuration.
- Command group 'k8s-configuration flux' is in preview and under development. Reference
- and support levels: https://aka.ms/CLI_refstatus
+ Arguments --cluster-name -c [Required] : Name of the Kubernetes cluster. --cluster-type -t [Required] : Specify Arc connected clusters or AKS managed clusters.
Arguments
--timeout : Maximum time to reconcile the source before timing out. Auth Arguments
- --local-auth-ref --local-ref : Local reference to a Kubernetes secret in the configuration
+ --local-auth-ref --local-ref : Local reference to a kubernetes secret in the configuration
namespace to use for communication to the source. Bucket Auth Arguments
azure-functions Configure Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-monitoring.md
Azure Functions integrates with Application Insights to better enable you to monitor your function apps. Application Insights, a feature of Azure Monitor, is an extensible Application Performance Management (APM) service that collects data generated by your function app, including information your app writes to logs. Application Insights integration is typically enabled when your function app is created. If your app doesn't have the instrumentation key set, you must first [enable Application Insights integration](#enable-application-insights-integration).
-You can use Application Insights without any custom configuration. The default configuration can result in high volumes of data. If you're using a Visual Studio Azure subscription, you might hit your data cap for Application Insights. To learn more about Application Insights costs, see [Manage usage and costs for Application Insights](../azure-monitor/app/pricing.md). For more information, see [Solutions with high-volume of telemetry](#solutions-with-high-volume-of-telemetry).
+You can use Application Insights without any custom configuration. The default configuration can result in high volumes of data. If you're using a Visual Studio Azure subscription, you might hit your data cap for Application Insights. To learn more about Application Insights costs, see [Application Insights billing](../azure-monitor/logs/cost-logs.md#application-insights-billing). For more information, see [Solutions with high-volume of telemetry](#solutions-with-high-volume-of-telemetry).
Later in this article, you learn how to configure and customize the data that your functions send to Application Insights. For a function app, logging is configured in the [host.json] file.
azure-functions Functions Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-monitoring.md
As Application Insights instrumentation is built into Azure Functions, you need
You can try out Application Insights integration with Azure Functions for free, with a daily limit on how much data is processed at no charge.
-If you enable Applications Insights during development, you might hit this limit during testing. Azure provides portal and email notifications when you're approaching your daily limit. If you miss those alerts and hit the limit, new logs won't appear in Application Insights queries. Be aware of the limit to avoid unnecessary troubleshooting time. For more information, see [Manage pricing and data volume in Application Insights](../azure-monitor/app/pricing.md).
+If you enable Application Insights during development, you might hit this limit during testing. Azure provides portal and email notifications when you're approaching your daily limit. If you miss those alerts and hit the limit, new logs won't appear in Application Insights queries. Be aware of the limit to avoid unnecessary troubleshooting time. For more information, see [Application Insights billing](../azure-monitor/logs/cost-logs.md#application-insights-billing).
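As one way to keep an eye on this, the sketch below uses the `application-insights` CLI extension to review and adjust the daily volume cap; the extension, the `billing` subgroup, and the resource names are assumptions here, not taken from this article:
```console
# Show the current daily data volume cap (GB/day) for the component.
az monitor app-insights component billing show --app my-function-app -g my-resource-group

# Raise the cap so test runs don't silently stop ingesting telemetry.
az monitor app-insights component billing update --app my-function-app -g my-resource-group --cap 1.0
```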
> [!IMPORTANT] > Application Insights has a [sampling](../azure-monitor/app/sampling.md) feature that can protect you from producing too much telemetry data on completed executions at times of peak load. Sampling is enabled by default. If you appear to be missing data, you might need to adjust the sampling settings to fit your particular monitoring scenario. To learn more, see [Configure sampling](configure-monitoring.md#configure-sampling).
azure-government Documentation Government Overview Itar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-itar.md
+ recommendations: false Previously updated : 02/28/2022 Last updated : 04/06/2022 # Azure support for export controls
-**Disclaimer:** Customers are wholly responsible for ensuring their own compliance with all applicable laws and regulations. Information provided in this article does not constitute legal advice, and customers should consult their legal advisors for any questions regarding regulatory compliance.
- To help you navigate export control rules, Microsoft has published the [Microsoft Azure Export Controls](https://aka.ms/Azure-Export-Paper) whitepaper. It describes US export controls particularly as they apply to software and technical data, reviews potential sources of export control risks, and offers specific guidance to help you assess your obligations under these controls.
+> [!NOTE]
+> **Disclaimer:** You're wholly responsible for ensuring your own compliance with all applicable laws and regulations. Information provided in this article doesn't constitute legal advice, and you should consult your legal advisor for any questions regarding regulatory compliance.
+ ## Overview of export control laws Export related definitions vary somewhat among various export control regulations. In simplified terms, an export often implies a transfer of restricted information, materials, equipment, software, and so on, to a foreign person or foreign destination by any means. US export control policy is enforced through export control laws and regulations administered primarily by the Department of Commerce, Department of State, Department of Energy, Nuclear Regulatory Commission, and Department of Treasury. Respective agencies within each department are responsible for specific areas of export control based on their historical administration, as shown in Table 1.
Export related definitions vary somewhat among various export control regulation
|Regulator|Law/Regulation|Reference| ||||
-|**Department of Commerce: </br> Bureau of Industry and Security (BIS)**|- Export Administration Act (EAA) of 1979 </br>- Export Administration Regulations (EAR)|- [P.L. 96-72](https://www.govinfo.gov/link/statute/93/503) </br>- [15 CFR Parts 730 ΓÇô 774](https://www.ecfr.gov/cgi-bin/text-idx?tpl=/ecfrbrowse/Title15/15cfrv2_02.tpl)|
-|**Department of State: </br> Directorate of Defense Trade Controls (DDTC)**|- Arms Export Control Act (AECA) </br>- International Traffic in Arms Regulations (ITAR)|- [22 U.S.C. 39](https://uscode.house.gov/view.xhtml?path=/prelim@title22/chapter39&edition=prelim) </br>- [22 CFR Parts 120 ΓÇô 130](https://www.ecfr.gov/cgi-bin/text-idx?tpl=/ecfrbrowse/Title22/22cfr120_main_02.tpl)|
-|**Department of Energy: </br> National Nuclear Security Administration (NNSA)**|- Atomic Energy Act of 1954 (AEA) </br>- Assistance to Foreign Atomic Energy Activities|- [42 U.S.C. 2011 et. seq.](https://www.govinfo.gov/content/pkg/USCODE-2010-title42/html/USCODE-2010-title42-chap23-divsnA.htm) </br>- [10 CFR Part 810](https://www.ecfr.gov/cgi-bin/text-idx?tpl=/ecfrbrowse/Title10/10cfr810_main_02.tpl)|
-|**Nuclear Regulatory Commission (NRC)**|- Nuclear Non-Proliferation Act of 1978 </br>- Export and Import of Nuclear Equipment and Materials|- [P.L. 95-242](https://www.govinfo.gov/content/pkg/STATUTE-92/pdf/STATUTE-92-Pg120.pdf) </br>- [10 CFR Part 110](https://www.ecfr.gov/cgi-bin/text-idx?tpl=/ecfrbrowse/Title10/10cfr110_main_02.tpl)|
-|**Department of Treasury: </br> Office of Foreign Assets Control (OFAC)**|- Trading with the Enemy Act (TWEA) </br>- Foreign Assets Control Regulations|- [50 U.S.C. Sections 5 and 16](https://www.govinfo.gov/content/pkg/USCODE-2009-title50/pdf/USCODE-2009-title50-app-tradingwi.pdf) </br>- [31 CFR Part 500](http://www.ecfr.gov/cgi-bin/text-idx?tpl=/ecfrbrowse/Title31/31cfrv3_02.tpl)|
+|**Department of Commerce: </br> Bureau of Industry and Security (BIS)**|- Export Administration Act (EAA) of 1979 </br>- Export Administration Regulations (EAR)|- [P.L. 96-72](https://www.govinfo.gov/content/pkg/STATUTE-93/pdf/STATUTE-93-Pg503.pdf#page=1) </br>- [15 CFR Parts 730 – 774](https://www.ecfr.gov/current/title-15/subtitle-B/chapter-VII/subchapter-C)|
+|**Department of State: </br> Directorate of Defense Trade Controls (DDTC)**|- Arms Export Control Act (AECA) </br>- International Traffic in Arms Regulations (ITAR)|- [22 U.S.C. 39](https://uscode.house.gov/view.xhtml?path=/prelim@title22/chapter39&edition=prelim) </br>- [22 CFR Parts 120 – 130](https://www.ecfr.gov/current/title-22/chapter-I/subchapter-M)|
+|**Department of Energy: </br> National Nuclear Security Administration (NNSA)**|- Atomic Energy Act of 1954 (AEA) </br>- Assistance to Foreign Atomic Energy Activities|- [42 U.S.C. 2011 et. seq.](https://www.govinfo.gov/content/pkg/USCODE-2010-title42/html/USCODE-2010-title42-chap23-divsnA.htm) </br>- [10 CFR Part 810](https://www.ecfr.gov/current/title-10/chapter-III/part-810?toc=1)|
+|**Nuclear Regulatory Commission (NRC)**|- Nuclear Non-Proliferation Act of 1978 </br>- Export and Import of Nuclear Equipment and Materials|- [P.L. 95-242](https://www.govinfo.gov/content/pkg/STATUTE-92/pdf/STATUTE-92-Pg120.pdf) </br>- [10 CFR Part 110](https://www.ecfr.gov/current/title-10/chapter-I/part-110?toc=1)|
+|**Department of Treasury: </br> Office of Foreign Assets Control (OFAC)**|- Trading with the Enemy Act (TWEA) </br>- Foreign Assets Control Regulations|- [50 U.S.C. Sections 5 and 16](https://www.govinfo.gov/content/pkg/USCODE-2009-title50/pdf/USCODE-2009-title50-app-tradingwi.pdf) </br>- [31 CFR Part 500](https://www.ecfr.gov/current/title-31/subtitle-B/chapter-V)|
This article contains a review of the current US export control regulations, considerations for cloud computing, and Azure features and commitments in support of export control requirements. ## EAR
-The US Department of Commerce is responsible for enforcing the [Export Administration Regulations](https://www.bis.doc.gov/index.php/regulations/export-administration-regulations-ear) (EAR) through the [Bureau of Industry and Security](https://www.bis.doc.gov/) (BIS). According to BIS [definitions](https://www.bis.doc.gov/index.php/documents/regulation-docs/412-part-734-scope-of-the-export-administration-regulations/file), export is the transfer of protected technology or information to a foreign destination or release of protected technology or information to a foreign person in the United States (also known as deemed export). Items subject to the EAR can be found on the [Commerce Control List](https://www.bis.doc.gov/index.php/regulations/commerce-control-list-ccl) (CCL), and each item has a unique [Export Control Classification Number](https://www.bis.doc.gov/index.php/licensing/commerce-control-list-classification/export-control-classification-number-eccn) (ECCN) assigned. Items not listed on the CCL are designated as EAR99 and most EAR99 commercial products do not require a license to be exported. However, depending on the destination, end user, or end use of the item, even an EAR99 item may require a BIS export license.
+The US Department of Commerce is responsible for enforcing the [Export Administration Regulations](https://www.bis.doc.gov/index.php/regulations/export-administration-regulations-ear) (EAR) through the [Bureau of Industry and Security](https://www.bis.doc.gov/) (BIS). According to BIS [definitions](https://www.bis.doc.gov/index.php/documents/regulation-docs/412-part-734-scope-of-the-export-administration-regulations/file), export is the transfer of protected technology or information to a foreign destination or release of protected technology or information to a foreign person in the United States, also known as deemed export. Items subject to the EAR can be found on the [Commerce Control List](https://www.bis.doc.gov/index.php/regulations/commerce-control-list-ccl) (CCL), and each item has a unique [Export Control Classification Number](https://www.bis.doc.gov/index.php/licensing/commerce-control-list-classification/export-control-classification-number-eccn) (ECCN) assigned. Items not listed on the CCL are designated as EAR99, and most EAR99 commercial products don't require a license to be exported. However, depending on the destination, end user, or end use of the item, even an EAR99 item may require a BIS export license.
+
+The EAR is applicable to dual-use items that have both commercial and military applications and to items with purely commercial application. The BIS has provided guidance that cloud service providers (CSP) aren't exporters of customers' data due to the customers' use of cloud services. Moreover, in the [final rule](https://www.federalregister.gov/documents/2016/06/03/2016-12734/revisions-to-definitions-in-the-export-administration-regulations) published on 3 June 2016, BIS clarified that EAR licensing requirements wouldn't apply if the transmission and storage of unclassified technical data and software were encrypted end-to-end using Federal Information Processing Standard (FIPS) 140 validated cryptographic modules and not intentionally stored in a military-embargoed country, that is, Country Group D:5 as described in [Supplement No. 1 to Part 740](https://www.ecfr.gov/current/title-15/subtitle-B/chapter-VII/subchapter-C/part-740?toc=1) of the EAR, or in the Russian Federation. The US Department of Commerce has made it clear that, when data or software is uploaded to the cloud, the customer, not the cloud provider, is the *exporter* who has the responsibility to ensure that transfers, storage, and access to that data or software comply with the EAR.
-The EAR is applicable to dual-use items that have both commercial and military applications and to items with purely commercial application. The BIS has provided guidance that cloud service providers (CSP) are not exporters of customersΓÇÖ data due to the customersΓÇÖ use of cloud services. Moreover, in the [final rule](https://www.federalregister.gov/documents/2016/06/03/2016-12734/revisions-to-definitions-in-the-export-administration-regulations) published on 3 June 2016, BIS clarified that EAR licensing requirements would not apply if the transmission and storage of unclassified technical data and software were encrypted end-to-end using Federal Information Processing Standard (FIPS) 140 validated cryptographic modules and not intentionally stored in a military-embargoed country (that is, Country Group D:5 as described in [Supplement No. 1 to Part 740](https://ecfr.io/Title-15/pt15.2.740#ap15.2.740_121.1) of the EAR) or in the Russian Federation. The US Department of Commerce has made it clear that, when data or software is uploaded to the cloud, the customer, not the cloud provider, is the ΓÇ£exporterΓÇ¥ who has the responsibility to ensure that transfers, storage, and access to that data or software complies with the EAR.
+Both Azure and Azure Government can help you meet your EAR compliance requirements. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters aren't located in proscribed countries or in the Russian Federation.
-Both Azure and Azure Government can help you meet your EAR compliance requirements. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters are not located in proscribed countries or in the Russian Federation. Azure and Azure Government rely on FIPS 140 validated cryptographic modules in the underlying operating system, and provide you with a [wide range of options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md), which can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control - known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs are not exportable ΓÇô there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. Moreover, Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract your keys.
+Azure services rely on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provide you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md). The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs aren't exportable – there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.**
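As a hedged illustration of the customer-managed key capability described above (the vault and key names, and the Azure Government region, are placeholders), an HSM-protected key can be created in a premium-tier key vault:
```console
# The premium SKU is required for HSM-protected keys.
az keyvault create --name contoso-export-kv --resource-group contoso-rg --location usgovvirginia --sku premium

# Generate an RSA key inside the HSM boundary; it can't be exported in clear text.
az keyvault key create --vault-name contoso-export-kv --name cmk-rsa-hsm --kty RSA-HSM --size 2048
```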
-You are responsible for choosing Azure or Azure Government regions for deploying your applications and data. Moreover, you are responsible for designing your applications to apply end-to-end data encryption that meets EAR requirements. Microsoft does not inspect, approve, or monitor your applications deployed on Azure or Azure Government.
+You're responsible for choosing Azure or Azure Government regions for deploying your applications and data. Moreover, you're responsible for designing your applications to apply end-to-end data encryption that meets EAR requirements. Microsoft doesn't inspect, approve, or monitor your applications deployed on Azure or Azure Government.
-Azure Government provides you with an extra layer of protection through contractual commitments regarding storage of your data in the United States and limiting potential access to systems processing your data to [screened US persons](./documentation-government-plan-security.md#screening). For more information about Azure support for EAR, see [Azure EAR compliance offering](/azure/compliance/offerings/offering-ear).
+Azure Government provides an extra layer of protection through contractual commitments regarding storage of your customer data in the United States and limiting potential access to systems processing your data to [screened US persons](./documentation-government-plan-security.md#screening). For more information about Azure support for EAR, see [Azure EAR compliance offering](/azure/compliance/offerings/offering-ear).
## ITAR
-The US Department of State has export control authority over defense articles, services, and related technologies under the [International Traffic in Arms Regulations](https://www.ecfr.gov/cgi-bin/text-idx?SID=8870638858a2595a32dedceb661c482c&mc=true&tpl=/ecfrbrowse/Title22/22CIsubchapM.tpl) (ITAR) managed by the [Directorate of Defense Trade Controls](http://www.pmddtc.state.gov/) (DDTC). Items under ITAR protection are documented on the [United States Munitions List](https://www.ecfr.gov/current/title-22/part-121) (USML). Customers who are manufacturers, exporters, and brokers of defense articles, services, and related technologies as defined on the USML must be registered with DDTC, must understand and abide by ITAR, and must self-certify that they operate in accordance with ITAR.
+The US Department of State has export control authority over defense articles, services, and related technologies under the [International Traffic in Arms Regulations](https://www.ecfr.gov/current/title-22/chapter-I/subchapter-M) (ITAR) managed by the [Directorate of Defense Trade Controls](https://www.pmddtc.state.gov/ddtc_public?id=ddtc_public_portal_itar_landing) (DDTC). Items under ITAR protection are documented on the [United States Munitions List](https://www.ecfr.gov/current/title-22/part-121) (USML). If you're a manufacturer, exporter, or broker of defense articles, services, and related technologies as defined on the USML, you must be registered with DDTC, must understand and abide by ITAR, and must self-certify that you operate in accordance with ITAR.
-DDTC [revised the ITAR rules](https://www.federalregister.gov/documents/2019/12/26/2019-27438/international-traffic-in-arms-regulations-creation-of-definition-of-activities-that-are-not-exports) effective 25 March 2020 to align them more closely with the EAR. These ITAR revisions introduced an end-to-end data encryption carve-out that incorporated many of the same terms that the Commerce Department adopted in 2016 for the EAR. Specifically, the revised ITAR rules state that activities that do not constitute exports, re-exports, re-transfers, or temporary imports include (among other activities) the sending, taking, or storing of technical data that is 1) unclassified, 2) secured using end-to-end encryption, 3) secured using FIPS 140 compliant cryptographic modules as prescribed in the regulations, 4) not intentionally sent to a person in or stored in a [country proscribed in § 126.1](https://ecfr.io/Title-22/pt22.1.126#se22.1.126_11) or the Russian Federation, and 5) not sent from a country proscribed in § 126.1 or the Russian Federation. Moreover, DDTC clarified that data in-transit via the Internet is not deemed to be stored. End-to-end encryption implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption is not provided to any third party.
+DDTC [revised the ITAR rules](https://www.federalregister.gov/documents/2019/12/26/2019-27438/international-traffic-in-arms-regulations-creation-of-definition-of-activities-that-are-not-exports) effective 25 March 2020 to align them more closely with the EAR. These ITAR revisions introduced an end-to-end data encryption carve-out that incorporated many of the same terms that the US Department of Commerce adopted in 2016 for the EAR. Specifically, the revised ITAR rules state that activities that don't constitute exports, re-exports, re-transfers, or temporary imports include (among other activities) the sending, taking, or storing of technical data that is 1) unclassified, 2) secured using end-to-end encryption, 3) secured using FIPS 140 compliant cryptographic modules as prescribed in the regulations, 4) not intentionally sent to a person in or stored in a [country proscribed in § 126.1](https://www.ecfr.gov/current/title-22/chapter-I/subchapter-M/part-126?toc=1) or the Russian Federation, and 5) not sent from a country proscribed in § 126.1 or the Russian Federation. Moreover, DDTC clarified that data in-transit via the Internet isn't deemed to be stored. End-to-end encryption implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption isn't provided to any third party.
-There is no ITAR compliance certification; however, both Azure and Azure Government can help you meet your ITAR compliance obligations. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters are not located in proscribed countries or in the Russian Federation. Azure and Azure Government rely on FIPS 140 validated cryptographic modules in the underlying operating system, and provide you with a [wide range of options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md), which can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control - known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs are not exportable ΓÇô there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. Moreover, Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract your keys.
+There's no ITAR compliance certification; however, both Azure and Azure Government can help you meet your ITAR compliance obligations. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters aren't located in proscribed countries or in the Russian Federation. Azure services rely on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provide you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md). The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs aren't exportable – there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.**
-You are responsible for choosing Azure or Azure Government regions for deploying your applications and data. Moreover, you are responsible for designing your applications to apply end-to-end data encryption that meets ITAR requirements. Microsoft does not inspect, approve, or monitor your applications deployed on Azure or Azure Government.
+You're responsible for choosing Azure or Azure Government regions for deploying your applications and data. Moreover, you're responsible for designing your applications to apply end-to-end data encryption that meets ITAR requirements. Microsoft doesn't inspect, approve, or monitor your applications deployed on Azure or Azure Government.
-Azure Government provides you with an extra layer of protection through contractual commitments regarding storage of your data in the United States and limiting potential access to systems processing your data to [screened US persons](./documentation-government-plan-security.md#screening). For more information about Azure support for ITAR, see [Azure ITAR compliance offering](/azure/compliance/offerings/offering-itar).
+Azure Government provides an extra layer of protection through contractual commitments regarding storage of your customer data in the United States and limiting potential access to systems processing your data to [screened US persons](./documentation-government-plan-security.md#screening). For more information about Azure support for ITAR, see [Azure ITAR compliance offering](/azure/compliance/offerings/offering-itar).
## DoE 10 CFR Part 810
-The US Department of Energy (DoE) export control regulation [10 CFR Part 810](http://www.gpo.gov/fdsys/pkg/FR-2015-02-23/pdf/2015-03479.pdf) implements section 57b.(2) of the [Atomic Energy Act of 1954](https://www.nrc.gov/docs/ML1327/ML13274A489.pdf) (AEA), as amended by section 302 of the [Nuclear Nonproliferation Act of 1978](http://www.nrc.gov/docs/ML1327/ML13274A492.pdf#page=19) (NNPA). It is administered by the [National Nuclear Security Administration](https://www.energy.gov/nnsa/national-nuclear-security-administration) (NNSA). The revised Part 810 (final rule) became effective on 25 March 2015, and, among other things, it controls the export of unclassified nuclear technology and assistance. It enables peaceful nuclear trade by helping to assure that nuclear technologies exported from the United States will not be used for non-peaceful purposes. Paragraph 810.7 (b) states that specific DoE authorization is required for providing or transferring sensitive nuclear technology to any foreign entity.
+The US Department of Energy (DoE) export control regulation [10 CFR Part 810](https://www.ecfr.gov/current/title-10/chapter-III/part-810?toc=1) implements section 57b.(2) of the [Atomic Energy Act of 1954](https://www.nrc.gov/docs/ML1327/ML13274A489.pdf) (AEA), as amended by section 302 of the [Nuclear Nonproliferation Act of 1978](http://www.nrc.gov/docs/ML1327/ML13274A492.pdf#page=19) (NNPA). It's administered by the [National Nuclear Security Administration](https://www.energy.gov/nnsa/10-cfr-part-810) (NNSA). The revised Part 810 (final rule) became effective on 25 March 2015, and, among other things, it controls the export of unclassified nuclear technology and assistance. It enables peaceful nuclear trade by helping to assure that nuclear technologies exported from the United States will be used only for peaceful purposes. Paragraph 810.7 (b) states that specific DoE authorization is required for providing or transferring sensitive nuclear technology to any foreign entity.
-**Azure Government can help you meet your DoE 10 CFR Part 810 export control requirements** because it is designed to implement specific controls that restrict access to information and systems to [US persons](./documentation-government-plan-security.md#screening) among Azure operations personnel. If you are deploying data to Azure Government, you are responsible for your own security classification process. For data subject to DoE export controls, the classification system is augmented by the [Unclassified Controlled Nuclear Information](https://www.energy.gov/sites/prod/files/hss/Classification/docs/UCNI-Tri-fold.pdf) (UCNI) controls established by Section 148 of the AEA. For more information about Azure support for DoE 10 CFR Part 810, see [Azure DoE 10 CFR Part 810 compliance offering](/azure/compliance/offerings/offering-doe-10-cfr-part-810).
+**Azure Government can help you meet your DoE 10 CFR Part 810 export control requirements** because it's designed to implement specific controls that restrict access to information and systems to [US persons](./documentation-government-plan-security.md#screening) among Azure operations personnel. If you're deploying data to Azure Government, you're responsible for your own security classification process. For data subject to DoE export controls, the classification system is augmented by the [Unclassified Controlled Nuclear Information](https://www.energy.gov/sites/prod/files/hss/Classification/docs/UCNI-Tri-fold.pdf) (UCNI) controls established by Section 148 of the AEA. For more information about Azure support for DoE 10 CFR Part 810, see [Azure DoE 10 CFR Part 810 compliance offering](/azure/compliance/offerings/offering-doe-10-cfr-part-810).
## NRC 10 CFR Part 110
-The [Nuclear Regulatory Commission](https://www.nrc.gov/) (NRC) is responsible for the [Export and Import of Nuclear Equipment and Materials](https://www.nrc.gov/about-nrc/ip/export-import.html) under the [10 CFR Part 110](https://www.gpo.gov/fdsys/pkg/FR-2015-02-23/pdf/2015-03479.pdf) export control regulations. The NRC regulates the export and import of nuclear facilities and related equipment and materials. The NRC does not regulate nuclear technology and assistance related to these items, which are under the DoE jurisdiction. Therefore, the **NRC 10 CFR Part 110 regulations would not be applicable** to Azure or Azure Government.
+The [Nuclear Regulatory Commission](https://www.nrc.gov/) (NRC) is responsible for the [Export and import of nuclear equipment and materials](https://www.nrc.gov/about-nrc/ip/export-import.html) under the [10 CFR Part 110](https://www.ecfr.gov/current/title-10/chapter-I/part-110?toc=1) export control regulations. The NRC regulates the export and import of nuclear facilities and related equipment and materials. The NRC doesn't regulate nuclear technology and assistance related to these items, which are under the DoE jurisdiction. Therefore, the **NRC 10 CFR Part 110 regulations wouldn't be applicable** to Azure or Azure Government.
## OFAC Sanctions Laws
-The [Office of Foreign Assets Control](https://www.treasury.gov/about/organizational-structure/offices/Pages/Office-of-Foreign-Assets-Control.aspx) (OFAC) is responsible for administering and enforcing economic and trade sanctions based on US foreign policy and national security goals against targeted foreign countries, terrorists, international narcotics traffickers, and those entities engaged in activities related to the proliferation of weapons of mass destruction.
+The [Office of Foreign Assets Control](https://home.treasury.gov/policy-issues/office-of-foreign-assets-control-sanctions-programs-and-information) (OFAC) is responsible for administering and enforcing economic and trade sanctions based on US foreign policy and national security goals against targeted foreign countries, terrorists, international narcotics traffickers, and those entities engaged in activities related to the proliferation of weapons of mass destruction.
-The OFAC defines prohibited transactions as trade or financial transactions and other dealings in which US persons may not engage unless authorized by OFAC or expressly exempted by statute. For web-based interactions, see [FAQ No. 73](https://home.treasury.gov/policy-issues/financial-sanctions/faqs/73) for general guidance released by OFAC, which specifies for example that &#8220;Firms that facilitate or engage in e-commerce should do their best to know their customers directly.&#8221;
+The OFAC defines prohibited transactions as trade or financial transactions and other dealings in which US persons may not engage unless authorized by OFAC or expressly exempt by statute. For web-based interactions, see [FAQ No. 73](https://home.treasury.gov/policy-issues/financial-sanctions/faqs/73) for general guidance released by OFAC, which specifies, for example, that &#8220;Firms that facilitate or engage in e-commerce should do their best to know their customers directly.&#8221;
-As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/dpa) (DPA), &#8220;Microsoft does not control or limit the regions from which customer or customer's end users may access or move customer data.&#8221; For Microsoft online services, Microsoft conducts due diligence to prevent transactions with entities from OFAC embargoed countries, for example, a sanctions target is not allowed to provision Azure services. OFAC has not issued guidance (like the guidance provided by BIS for the EAR) that draws a distinction between cloud service providers and customers when it comes to deemed export. Therefore, it would be **your responsibility to exclude sanctions targets from online transactions** involving your applications (including web sites) deployed on Azure. Microsoft does not block network traffic to your web sites deployed on Azure. Even though OFAC mentions that customers can restrict access based on IP address ranges, they also acknowledge that this approach does not fully address an internet firm's compliance risks. Therefore, OFAC recommends that e-commerce firms should know their customers directly. Microsoft is not responsible for and does not have the means to know directly the end users that interact with your applications deployed on Azure.
+As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/dpa) (DPA), &#8220;Microsoft doesn't control or limit the regions from which customer or customer's end users may access or move customer data.&#8221; For Microsoft online services, Microsoft conducts due diligence to prevent transactions with entities from OFAC embargoed countries. For example, a sanctions target isn't allowed to provision Azure services. OFAC hasn't issued guidance, like the guidance provided by BIS for the EAR, that draws a distinction between cloud service providers and customers when it comes to deemed export. Therefore, it would be **your responsibility to exclude sanctions targets from online transactions** involving your applications, including web sites, deployed on Azure. Microsoft doesn't block network traffic to your web sites deployed on Azure. Even though OFAC mentions that customers can restrict access based on IP address ranges, they also acknowledge that this approach doesn't fully address an internet firm's compliance risks. Therefore, OFAC recommends that e-commerce firms should know their customers directly. Microsoft isn't responsible for and doesn't have the means to know directly the end users that interact with your applications deployed on Azure.
-OFAC sanctions are in place to prevent &#8220;conducting business with a sanctions target&#8221;, that is, preventing transactions involving trade, payments, financial instruments, and so on. OFAC sanctions are not intended to prevent a resident of a proscribed country from viewing a public web site.
+OFAC sanctions are in place to prevent &#8220;conducting business with a sanctions target&#8221;, that is, preventing transactions involving trade, payments, financial instruments, and so on. OFAC sanctions aren't intended to prevent a resident of a proscribed country from viewing a public web site.
## Managing export control requirements
-You should assess carefully how your use of Azure may implicate US export controls and determine whether any of the data you want to store or process in the cloud may be subject to export controls. Microsoft provides you with contractual commitments, operational processes, and technical features to help you meet your export control obligations when using Azure. The following Azure features are available to help you manage potential export control risks:
+You should assess carefully how your use of Azure may implicate US export controls, and determine whether any of the data you want to store or process in the cloud may be subject to export controls. Microsoft provides you with contractual commitments, operational processes, and technical features to help you meet your export control obligations when using Azure. The following Azure features are available to help you manage potential export control risks:
-- **Ability to control data location** - You have visibility as to where your [data is stored](https://azure.microsoft.com/global-infrastructure/data-residency/), and robust tools to restrict data storage to a single geography, region, or country. For example, you may therefore ensure that data is stored in the United States or your country of choice and minimize transfer of controlled technology/technical data outside the target country. Your data is not *intentionally stored* in a non-conforming location, consistent with the EAR and ITAR rules.-- **End-to-end encryption** - Implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption is not provided to any third party. Azure relies on FIPS 140 validated cryptographic modules in the underlying operating system, and provides you with a [wide range of options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md), which can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control ([customer-managed keys](../security/fundamentals/encryption-models.md), CMK). Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract your keys.-- **Control over access to data** - You can know and control who can access your data and on what terms. Microsoft technical support personnel do not need and do not have default access to your data. For those rare instances where resolving your support requests requires elevated access to your data, [Customer Lockbox for Azure](../security/fundamentals/customer-lockbox-overview.md) puts you in charge of approving or denying data access requests.-- **Tools and protocols to prevent unauthorized deemed export/re-export** - Apart from the EAR and ITAR *end-to-end encryption* safe harbor for physical storage locations, the use of encryption also helps protect against a potential deemed export (or deemed re-export), because even if a non-US person has access to the encrypted data, nothing is revealed to non-US person who cannot read or understand the data while it is encrypted and thus there is no release of any controlled data. However, ITAR requires some authorization before granting foreign persons with access information that would enable them to decrypt ITAR technical data. Azure offers a wide range of encryption capabilities and solutions, flexibility to choose among encryption options, and robust tools for managing encryption.
+- **Ability to control data location** – You have visibility as to where your [data is stored](https://azure.microsoft.com/global-infrastructure/data-residency/), and robust tools to restrict data storage to a single geography, region, or country. For example, you can ensure that data is stored in the United States or your country of choice and minimize transfer of controlled technology/technical data outside the target country. Your data isn't *intentionally stored* in a non-conforming location, consistent with the EAR and ITAR rules.
+- **End-to-end encryption** – Implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption isn't provided to any third party. Azure relies on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provides you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md). The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md); see the sketch after this list. Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.
+- **Control over access to data** – You can know and control who can access your data and on what terms. Microsoft technical support personnel don't need and don't have default access to your data. For those rare instances where resolving your support requests requires elevated access to your data, [Customer Lockbox for Azure](../security/fundamentals/customer-lockbox-overview.md) puts you in charge of approving or denying data access requests.
+- **Tools and protocols to prevent unauthorized deemed export/re-export** – Apart from the EAR and ITAR *end-to-end encryption* safe harbor for physical storage locations, the use of encryption also helps protect against a potential deemed export, or deemed re-export, because even if a non-US person has access to the encrypted data, nothing is revealed to a non-US person who can't read or understand the data while it's encrypted, and thus there's no release of any controlled data. However, ITAR requires authorization before granting foreign persons access to information that would enable them to decrypt ITAR technical data. Azure offers a wide range of encryption capabilities and solutions, flexibility to choose among encryption options, and robust tools for managing encryption.
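The encryption bullet above mentions keeping keys in HSM-backed Key Vault under your control. The following minimal sketch, written against the `azure-identity` and `azure-keyvault-keys` Python packages, shows one way to create such an HSM-protected customer-managed key; the vault URL and key name are placeholders, and RSA-HSM keys require a Premium-tier vault.

```python
# Minimal sketch (assumed setup): create an HSM-protected key in Azure Key Vault
# for use as a customer-managed key (CMK). Requires the azure-identity and
# azure-keyvault-keys packages and key-create permissions on the vault.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

credential = DefaultAzureCredential()
# Placeholder vault URL; RSA-HSM keys require a Premium-tier (HSM-backed) vault.
client = KeyClient(vault_url="https://contoso-vault.vault.azure.net", credential=credential)

# hardware_protected=True asks Key Vault to keep the key in FIPS 140 validated HSMs.
key = client.create_rsa_key("export-control-cmk", hardware_protected=True, size=2048)
print(key.name, key.key_type)  # for example: export-control-cmk RSA-HSM
```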
## Location of customer data
-Microsoft provides [strong customer commitments](https://www.microsoft.com/trust-center/privacy/data-location) regarding [cloud services data residency and transfer policies](https://azure.microsoft.com/global-infrastructure/data-residency/). Most Azure services are deployed regionally and enable you to specify the region into which the service will be deployed, for example, United States. This commitment helps ensure that [customer data](https://www.microsoft.com/trust-center/privacy/customer-data-definitions) stored in a US region will remain in the United States and will not be moved to another region outside the United States.
+Microsoft provides [strong customer commitments](https://www.microsoft.com/trust-center/privacy/data-location) regarding [cloud services data residency and transfer policies](https://azure.microsoft.com/global-infrastructure/data-residency/). Most Azure services are deployed regionally and enable you to specify the region into which the service will be deployed, for example, United States. This commitment helps ensure that [customer data](https://www.microsoft.com/trust-center/privacy/customer-data-definitions) stored in a US region will remain in the United States and won't be moved to another region outside the United States.
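As a complement to these commitments, you can verify where your own resources are deployed. The following sketch, which assumes the `azure-identity` and `azure-mgmt-resource` packages and a placeholder subscription ID, lists the region of every resource in a subscription so you can confirm that nothing sits outside your intended geography.

```python
# Minimal sketch (assumed setup): list the Azure region of every resource in a
# subscription to confirm deployments stay in the intended geography.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")  # placeholder ID

for resource in client.resources.list():
    print(f"{resource.location:<20} {resource.type:<60} {resource.name}")
```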
## Data encryption
Data encryption provides isolation assurances that are tied directly to encrypti
### FIPS 140 validated cryptography
-The [Federal Information Processing Standard (FIPS) 140](https://csrc.nist.gov/publications/detail/fips/140/2/final) is a US government standard that defines minimum security requirements for cryptographic modules in information technology products. The current version of the standard, FIPS 140-2, has security requirements covering 11 areas related to the design and implementation of a cryptographic module. Microsoft maintains an active commitment to meeting the [FIPS 140 requirements](/azure/compliance/offerings/offering-fips-140-2), having validated cryptographic modules since the standard's inception in 2001. Microsoft validates its cryptographic modules under the US National Institute of Standards and Technology (NIST) [Cryptographic Module Validation Program](https://csrc.nist.gov/Projects/cryptographic-module-validation-program) (CMVP). Multiple Microsoft products, including many cloud services, use these cryptographic modules.
+The [Federal Information Processing Standard (FIPS) 140](https://csrc.nist.gov/publications/detail/fips/140/3/final) is a US government standard that defines minimum security requirements for cryptographic modules in information technology products. The current version of the standard, FIPS 140-3, has security requirements covering 11 areas related to the design and implementation of a cryptographic module. Microsoft maintains an active commitment to meeting the [FIPS 140 requirements](/azure/compliance/offerings/offering-fips-140-2), having validated cryptographic modules since the standard's inception in 2001. Microsoft validates its cryptographic modules under the US National Institute of Standards and Technology (NIST) [Cryptographic Module Validation Program](https://csrc.nist.gov/Projects/cryptographic-module-validation-program) (CMVP). Multiple Microsoft products, including many cloud services, use these cryptographic modules.
While the current CMVP FIPS 140 implementation guidance precludes a FIPS 140 validation for a cloud service, cloud service providers can obtain and operate FIPS 140 validated cryptographic modules for the computing elements that comprise their cloud services. Azure is built with a combination of hardware, commercially available operating systems (Linux and Windows), and Azure-specific version of Windows. Through the Microsoft [Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) (SDL), all Azure services use FIPS 140 approved algorithms for data security because the operating system uses FIPS 140 approved algorithms while operating at a hyper scale cloud. The corresponding crypto modules are FIPS 140 validated as part of the Microsoft [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). Moreover, you can store your own cryptographic keys and other secrets in FIPS 140 validated hardware security modules (HSMs).
Azure provides many options for [encrypting data in transit](../security/fundame
Azure provides extensive options for [encrypting data at rest](../security/fundamentals/encryption-atrest.md) to help you safeguard your data and meet your compliance needs using both Microsoft-managed encryption keys and customer-managed encryption keys. This process relies on multiple encryption keys and services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management. For more information about Azure Storage encryption and Azure Disk encryption, see [Data encryption at rest](./azure-secure-isolation-guidance.md#data-encryption-at-rest).
-Azure SQL Database provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest by [default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/). TDE performs real-time encryption and decryption operations on the data and log files. Database Encryption Key (DEK) is a symmetric key stored in the database boot record for availability during recovery. It is secured via a certificate stored in the master database of the server or an asymmetric key called TDE Protector stored under your control in [Azure Key Vault](../key-vault/general/security-features.md). Key Vault supports [bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md) (BYOK), which enables you to store the TDE Protector in Key Vault and control key management tasks including key rotation, permissions, deleting keys, enabling auditing/reporting on all TDE Protectors, and so on. The key can be generated by the Key Vault, imported, or [transferred to the Key Vault from an on-premises HSM device](../key-vault/keys/hsm-protected-keys.md). You can also use the [Always Encrypted](../azure-sql/database/always-encrypted-azure-key-vault-configure.md) feature of Azure SQL Database, which is designed specifically to help protect sensitive data by allowing you to encrypt data inside your applications and [never reveal the encryption keys to the database engine](/sql/relational-databases/security/encryption/always-encrypted-database-engine). In this manner, Always Encrypted provides separation between those users who own the data (and can view it) and those users who manage the data (but should have no access).
+Azure SQL Database provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest by [default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/). TDE performs real-time encryption and decryption operations on the data and log files. Database Encryption Key (DEK) is a symmetric key stored in the database boot record for availability during recovery. It's secured via a certificate stored in the master database of the server or an asymmetric key called TDE Protector stored under your control in [Azure Key Vault](../key-vault/general/security-features.md). Key Vault supports [bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md) (BYOK), which enables you to store the TDE Protector in Key Vault and control key management tasks including key rotation, permissions, deleting keys, enabling auditing/reporting on all TDE Protectors, and so on. The key can be generated by the Key Vault, imported, or [transferred to the Key Vault from an on-premises HSM device](../key-vault/keys/hsm-protected-keys.md). You can also use the [Always Encrypted](../azure-sql/database/always-encrypted-azure-key-vault-configure.md) feature of Azure SQL Database, which is designed specifically to help protect sensitive data by allowing you to encrypt data inside your applications and [never reveal the encryption keys to the database engine](/sql/relational-databases/security/encryption/always-encrypted-database-engine). In this manner, Always Encrypted provides separation between those users who own the data and can view it and those users who manage the data but should have no access.
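To illustrate the separation that Always Encrypted provides, the sketch below (a hypothetical example, not taken from the article) connects with the ODBC driver's `ColumnEncryption=Enabled` setting so encrypted columns are decrypted in the client driver rather than by the database engine. The server, database, and table names are placeholders, and depending on your driver version and where the column master key lives (for example, Azure Key Vault), additional key-store authentication keywords may be required.

```python
# Minimal sketch (assumed setup): query an Always Encrypted column through
# ODBC Driver 17 for SQL Server; decryption happens client-side, so the
# database engine never sees plaintext or the encryption keys.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:contoso-sql.database.windows.net,1433;"  # placeholder server
    "Database=contosodb;"                                # placeholder database
    "Authentication=ActiveDirectoryInteractive;"
    "ColumnEncryption=Enabled;"                          # enable Always Encrypted in the driver
)
# dbo.Customers and its encrypted SSN column are hypothetical.
for row in conn.execute("SELECT TOP 5 CustomerId, SSN FROM dbo.Customers"):
    print(row)
```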
## Restrictions on insider access
azure-monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-manage.md
description: This article describes the different management tasks that you will
Previously updated : 06/14/2019 Last updated : 04/06/2022
azure-monitor Data Sources Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-syslog.md
Title: Collect Syslog data sources with Log Analytics agent in Azure Monitor description: Syslog is an event logging protocol that is common to Linux. This article describes how to configure collection of Syslog messages in Log Analytics and details of the records they create. -- Previously updated : 02/26/2021 Last updated : 04/06/2022
azure-monitor Data Sources Windows Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-windows-events.md
Title: Collect Windows event log data sources with Log Analytics agent in Azure Monitor description: Describes how to configure the collection of Windows Event logs by Azure Monitor and details of the records they create. -- Previously updated : 02/26/2021 Last updated : 04/06/2022
azure-monitor Diagnostics Extension Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-overview.md
Title: Azure Diagnostics extension overview description: Use Azure diagnostics for debugging, measuring performance, monitoring, traffic analysis in cloud services, virtual machines and service fabric -- Previously updated : 02/14/2020 Last updated : 04/06/2022
azure-monitor Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/gateway.md
Title: Connect computers by using the Log Analytics gateway | Microsoft Docs description: Connect your devices and Operations Manager-monitored computers by using the Log Analytics gateway to send data to the Azure Automation and Log Analytics service when they do not have internet access. -- Previously updated : 12/24/2019 Last updated : 04/06/2022
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
There are many ways to explore Application Insights telemetry. For more informat
## Next steps -- [Manage usage and costs for Application Insights](pricing.md#manage-usage-and-costs-for-application-insights) - [Instrument your web pages](./javascript.md) for page view, AJAX, and other client-side telemetry. - [Analyze mobile app usage](../app/mobile-center-quickstart.md) by integrating with Visual Studio App Center. - [Monitor availability with URL ping tests](./monitor-web-app-availability.md) to your website from Application Insights servers.
azure-monitor Asp Net Troubleshoot No Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-troubleshoot-no-data.md
Last updated 05/21/2020
*In Application Insights, I only see a fraction of the events that are being generated by my app.* * If you're consistently seeing the same fraction, it's probably because of adaptive [sampling](../../azure-monitor/app/sampling.md). To confirm this, open Search (from the **Overview** in the portal on the left) and look at an instance of a Request or other event. To see the full property details, select the ellipsis (**...**) at the bottom of the **Properties** section. If Request Count > 1, sampling is in operation. A query that estimates the overall retained percentage is sketched after this list.
-* It's possible that you're hitting a [data rate limit](../../azure-monitor/app/pricing.md#limits-summary) for your pricing plan. These limits are applied per minute.
+* It's possible that you're hitting a [data rate limit](../service-limits.md#application-insights) for your pricing plan. These limits are applied per minute.
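For a workspace-based Application Insights resource, you can also estimate how much sampling is occurring with a log query, as in the following sketch. It assumes the `azure-monitor-query` package, a placeholder workspace GUID, and the workspace-based `AppRequests` table (for classic resources, the equivalent Analytics query uses `requests` and `itemCount`).

```python
# Minimal sketch (assumed setup): estimate the percentage of requests retained
# after sampling by comparing stored rows to the original item counts.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
query = """
AppRequests
| summarize RetainedPercentage = 100.0 * count() / sum(ItemCount)
"""
result = client.query_workspace("<workspace-guid>", query, timespan=timedelta(days=1))
for table in result.tables:
    for row in table.rows:
        print(f"Retained after sampling: {row[0]:.1f}%")
```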
*I'm randomly experiencing data loss.*
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md
This article will cover how to create an Azure Function with TrackAvailability()
> [!NOTE] > This example is designed solely to show you the mechanics of how the TrackAvailability() API call works within an Azure Function. It doesn't show how to write the underlying HTTP test code or business logic that would be required to turn this into a fully functional availability test. By default, if you walk through this example, you'll create a basic availability HTTP GET test.
+> To follow these instructions, you must use the [dedicated plan](https://docs.microsoft.com/azure/azure-functions/dedicated-plan) to allow editing code in App Service Editor.
## Create a timer trigger function
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Workspace-based Application Insights allows you to take advantage of all the lat
- Encryption-at-rest policy - Lifetime management policy - Network access for all data associated with Application Insights Profiler and Snapshot Debugger
-* [Commitment Tiers](../logs/manage-cost-storage.md#pricing-model) enable you to save as much as 30% compared to the Pay-As-You-Go price. Otherwise, Pay-as-you-go data ingestion and data retention are billed similarly in Log Analytics as they are in Application Insights.
+* [Commitment Tiers](../logs/cost-logs.md#commitment-tiers) enable you to save as much as 30% compared to the Pay-As-You-Go price. Otherwise, Pay-as-you-go data ingestion and data retention are billed similarly in Log Analytics as they are in Application Insights.
* Faster data ingestion via Log Analytics streaming ingestion. ## Migration process When you migrate to a workspace-based resource, no data is transferred from your classic resource's storage to the new workspace-based storage. Choosing to migrate will change the location where new data is written to a Log Analytics workspace while preserving access to your classic resource data.
-Your classic resource data will persist and be subject to the retention settings on your classic Application Insights resource. All new data ingested post migration will be subject to the [retention settings](../logs/manage-cost-storage.md#change-the-data-retention-period) of the associated Log Analytics workspace, which also supports [different retention settings by data type](../logs/manage-cost-storage.md#retention-by-data-type).
+Your classic resource data will persist and be subject to the retention settings on your classic Application Insights resource. All new data ingested post migration will be subject to the [retention settings](../logs/data-retention-archive.md) of the associated Log Analytics workspace, which also supports [different retention settings by data type](../logs/data-retention-archive.md#set-retention-and-archive-policy-by-table).
The migration process is **permanent, and cannot be reversed**. Once you migrate a resource to workspace-based Application Insights, it will always be a workspace-based resource. However, once you migrate you're able to change the target workspace as often as needed. <!-- This note duplicates information in pricing.md. Understanding workspace-based usage and costs has been added as a migration prerequisite.
Once the migration is complete, you can use [diagnostic settings](../essentials/
> - If you currently store Application Insights data for longer than the default 90 days and want to retain this larger retention period, you may need to adjust your workspace retention settings. > - If you've selected data retention greater than 90 days on data ingested into the Classic Application Insights resource prior to migration, data retention will continue to be billed through that Application Insights resource until that data exceeds the retention period. -- Understand [Workspace-based Application Insights](pricing.md#workspace-based-application-insights) usage and costs.
+- Understand [Workspace-based Application Insights](../logs/cost-logs.md#application-insights-billing) usage and costs.
## Migrate your resource
Once your resource is migrated, you'll see the corresponding workspace info in t
Clicking the blue link text will take you to the associated Log Analytics workspace where you can take advantage of the new unified workspace query environment. > [!NOTE]
-> After migrating to a workspace-based Application Insights resource we recommend using the [workspace's daily cap](../logs/manage-cost-storage.md#manage-your-maximum-daily-data-volume) to limit ingestion and costs instead of the cap in Application Insights.
+> After migrating to a workspace-based Application Insights resource we recommend using the [workspace's daily cap](../logs/daily-cap.md) to limit ingestion and costs instead of the cap in Application Insights.
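Before relying on a daily cap, it can help to see which data types drive billable ingestion in the workspace. The following sketch assumes the `azure-monitor-query` package and a placeholder workspace GUID; the `Usage` table reports quantities in MB.

```python
# Minimal sketch (assumed setup): summarize billable ingestion by data type for
# the last 30 days, which helps decide whether a daily cap is warranted.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
query = """
Usage
| where IsBillable == true
| summarize BillableMB = sum(Quantity) by DataType
| order by BillableMB desc
"""
result = client.query_workspace("<workspace-guid>", query, timespan=timedelta(days=30))
for table in result.tables:
    for row in table.rows:            # columns: DataType, BillableMB
        print(f"{row[0]:<30} {row[1]:>12,.1f} MB")
```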
## Understanding log queries
azure-monitor Create Workspace Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-workspace-resource.md
Workspace-based resources support full integration between Application Insights
This also allows for common Azure role-based access control (Azure RBAC) across your resources, and eliminates the need for cross-app/workspace queries. > [!NOTE]
-> Data ingestion and retention for workspace-based Application Insights resources are billed through the Log Analytics workspace where the data is located. [Learn more]( ./pricing.md#workspace-based-application-insights) about billing for workspace-based Application Insights resources.
+> Data ingestion and retention for workspace-based Application Insights resources are billed through the Log Analytics workspace where the data is located. [Learn more](../logs/cost-logs.md) about billing for workspace-based Application Insights resources.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
Workspace-based Application Insights allows you to take advantage of the latest
* [Customer-Managed Keys (CMK)](../logs/customer-managed-keys.md) provides encryption at rest for your data with encryption keys to which only you have access. * [Azure Private Link](../logs/private-link-security.md) allows you to securely link Azure PaaS services to your virtual network using private endpoints. * [Bring Your Own Storage (BYOS) for Profiler and Snapshot Debugger](./profiler-bring-your-own-storage.md) gives you full control over the encryption-at-rest policy, the lifetime management policy, and network access for all data associated with Application Insights Profiler and Snapshot Debugger.
-* [Commitment Tiers](../logs/manage-cost-storage.md#pricing-model) enable you to save as much as 30% compared to the Pay-As-You-Go price.
+* [Commitment Tiers](../logs/cost-logs.md#commitment-tiers) enable you to save as much as 30% compared to the Pay-As-You-Go price.
* Faster data ingestion via Log Analytics streaming ingestion. ## Create workspace-based resource
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md
For web pages, open your browser's debugging window.
This would be possible by writing a [telemetry processor plugin](./api-filtering-sampling.md). ## How long is the data kept?
-Raw data points (that is, items that you can query in Analytics and inspect in Search) are kept for up to 730 days. You can [select a retention duration](./pricing.md#change-the-data-retention-period) of 30, 60, 90, 120, 180, 270, 365, 550 or 730 days. If you need to keep data longer than 730 days, you can use [Continuous Export](./export-telemetry.md) to copy it to a storage account during data ingestion.
+Raw data points (that is, items that you can query in Analytics and inspect in Search) are kept for up to 730 days. You can [select a retention duration](../logs/data-retention-archive.md#set-retention-and-archive-policy-by-table) of 30, 60, 90, 120, 180, 270, 365, 550 or 730 days. If you need to keep data longer than 730 days, you can use [Continuous Export](./export-telemetry.md) to copy it to a storage account during data ingestion.
Data kept longer than 90 days will incur additional charges. Learn more about Application Insights pricing on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
azure-monitor Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/devops.md
When an alert is raised, Application Insights can automatically create a work it
## But what about...? * [Privacy and storage](./data-retention-privacy.md) - Your telemetry is kept on Azure secure servers. * Performance - the impact is very low. Telemetry is batched.
-* [Pricing](./pricing.md) - You can get started for free, and that continues while you're in low volume.
+* [Pricing](../logs/cost-logs.md#application-insights-billing) - You can get started for free, and that continues while you're in low volume.
## Next steps
azure-monitor Diagnostic Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/diagnostic-search.md
In addition to the out-of-the-box telemetry sent by Application Insights SDK, yo
### <a name="limits"></a>How much data is retained?
-See the [Limits summary](./pricing.md#limits-summary).
+See the [Limits summary](../service-limits.md#application-insights).
### How can I see POST data in my server requests?
azure-monitor Legacy Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/legacy-pricing.md
+
+ Title: Application Insights legacy enterprise (per node) pricing tier
+description: Describes the legacy pricing tier for Application Insights.
+ Last updated : 02/18/2022+
+
+# Application Insights legacy enterprise (per node) pricing tier
+For early adopters of Azure Application Insights, there are still two possible pricing tiers: Basic and Enterprise. The Basic pricing tier is the default tier. It includes all Enterprise tier features at no extra cost and bills primarily on the volume of data that's ingested.
+
+These legacy pricing tiers have been renamed. The Enterprise pricing tier is now called **Per Node** and the Basic pricing tier is now called **Per GB**. These new names are used below and in the Azure portal.
+
+The Per Node (formerly Enterprise) tier has a per-node charge, and each node receives a daily data allowance. In the Per Node pricing tier, you're charged for data ingested above the included allowance. If you're using Operations Management Suite, you should choose the Per Node tier. In April 2018, we [introduced](https://azure.microsoft.com/blog/introducing-a-new-way-to-purchase-azure-monitoring-services/) a new pricing model for Azure monitoring. This model adopts a simple "pay-as-you-go" model across the complete portfolio of monitoring services. Learn more about the [new pricing model](../usage-estimated-costs.md).
+
+For current prices in your currency and region, see [Application Insights pricing](https://azure.microsoft.com/pricing/details/application-insights/).
+
+## Understanding billed usage on the legacy Enterprise (Per Node) tier
+
+As described below in more detail, the legacy Enterprise (Per Node) tier combines usage from across all Application Insights resources in a subscription to calculate the number of nodes and the data overage. Due to this combination process, **usage for all Application Insights resources in a subscription is reported against just one of the resources**. This makes reconciling your billed usage with the usage you observe for each Application Insights resource complicated.
+
+> [!WARNING]
+> Because of the complexity of tracking and understanding usage of Application Insights resources in the legacy Enterprise (Per Node) tier, we strongly recommend using the current Pay-As-You-Go pricing tier.
+
+## Per Node tier and Operations Management Suite subscription entitlements
+
+Customers who purchase Operations Management Suite E1 and E2 can get Application Insights Per Node as a supplemental component at no extra cost as [previously announced](/archive/blogs/msoms/azure-application-insights-enterprise-as-part-of-operations-management-suite-subscription). Specifically, each unit of Operations Management Suite E1 and E2 includes an entitlement to one node of the Application Insights Per Node tier. Each Application Insights node includes up to 200 MB of data ingested per day (separate from Log Analytics data ingestion), with 90-day data retention at no extra cost. The tier is described in more detail later in the article.
+
+Because this tier is applicable only to customers with an Operations Management Suite subscription, customers who don't have an Operations Management Suite subscription don't see an option to select this tier.
+
+> [!NOTE]
+> To ensure that you get this entitlement, your Application Insights resources must be in the Per Node pricing tier. This entitlement applies only as nodes. Application Insights resources in the Per GB tier don't realize any benefit.
+> This entitlement isn't visible in the estimated costs shown in the **Usage and estimated cost** pane. Also, if you move a subscription to the new Azure monitoring pricing model introduced in April 2018, the Per GB tier is the only tier available. Moving a subscription to the new Azure monitoring pricing model isn't advisable if you have an Operations Management Suite subscription.
+
+## How the Per Node tier works
+
+* You pay for each node that sends telemetry for any apps in the Per Node tier.
+ * A *node* is a physical or virtual server machine or a platform-as-a-service role instance that hosts your app.
+ * Development machines, client browsers, and mobile devices don't count as nodes.
+ * If your app has several components that send telemetry, such as a web service and a back-end worker, the components are counted separately.
+ * [Live Metrics Stream](../app/live-stream.md) data isn't counted for pricing purposes. In a subscription, your charges are per node, not per app. If you have five nodes that send telemetry for 12 apps, the charge is for five nodes.
+* Although charges are quoted per month, you're charged only for any hour in which a node sends telemetry from an app. The hourly charge is the quoted monthly charge divided by 744 (the number of hours in a 31-day month).
+* A data volume allocation of 200 MB per day is given for each node that's detected (with hourly granularity). Unused data allocation isn't carried over from one day to the next.
+ * If you choose the Per Node pricing tier, each subscription gets a daily allowance of data based on the number of nodes that send telemetry to the Application Insights resources in that subscription. So, if you have five nodes that send data all day, you'll have a pooled allowance of 1 GB applied to all Application Insights resources in that subscription. It doesn't matter if certain nodes send more data than other nodes because the included data is shared across all nodes. If on a given day, the Application Insights resources receive more data than is included in the daily data allocation for this subscription, the per-GB overage data charges apply.
+ * The daily data allowance is calculated as the number of hours in the day (using UTC) that each node sends telemetry divided by 24 multiplied by 200 MB. So, if you have four nodes that send telemetry during 15 of the 24 hours in the day, the included data for that day would be ((4 &#215; 15) / 24) &#215; 200 MB = 500 MB. At the price of 2.30 USD per GB for data overage, the charge would be 1.15 USD if the nodes send 1 GB of data that day. This calculation is reproduced in the sketch after this list.
+ * The Per Node tier daily allowance isn't shared with applications for which you have chosen the Per GB tier. Unused allowance isn't carried over from day-to-day.
+
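To make the allowance and overage math above concrete, here is a small, self-contained Python sketch that reproduces the documented example (4 nodes sending for 15 of 24 hours, 200 MB per node per day, and an example overage price of 2.30 USD per GB); the prices are examples only.

```python
# Minimal sketch: reproduce the Per Node allowance math from the list above.
NODE_ALLOWANCE_MB = 200      # included data per detected node per day
OVERAGE_USD_PER_GB = 2.30    # example price only

def daily_overage_usd(nodes: int, hours_sending: int, ingested_gb: float) -> float:
    """Pooled daily allowance is (node-hours / 24) * 200 MB; overage is billed per GB."""
    included_gb = (nodes * hours_sending / 24) * NODE_ALLOWANCE_MB / 1000
    return max(ingested_gb - included_gb, 0.0) * OVERAGE_USD_PER_GB

# 4 nodes sending 15 of 24 hours share 500 MB; 1 GB ingested leaves 0.5 GB of overage.
print(round(daily_overage_usd(nodes=4, hours_sending=15, ingested_gb=1.0), 2))  # -> 1.15
```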
+## Examples of how to determine distinct node count
+
+| Scenario | Total daily node count |
+|:|:-:|
+| 1 application using 3 Azure App Service instances and 1 virtual server | 4 |
+| 3 applications running on 2 VMs; the Application Insights resources for these applications are in the same subscription and in the Per Node tier | 2 |
+| 4 applications whose Application Insights resources are in the same subscription; each application running 2 instances during 16 off-peak hours, and 4 instances during 8 peak hours | 13.33 |
+| Cloud services with 1 Worker Role and 1 Web Role, each running 2 instances | 4 |
+| A 5-node Azure Service Fabric cluster running 50 microservices; each microservice running 3 instances | 5|
+
+* The precise node counting depends on which Application Insights SDK your application is using.
+ * In SDK versions 2.2 and later, both the Application Insights [Core SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights/) and the [Web SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Web/) report each application host as a node. Examples are the computer name for physical server and VM hosts or the instance name for cloud services. The only exception is an application that uses only the [.NET Core](https://dotnet.github.io/) and the Application Insights Core SDK. In that case, only one node is reported for all hosts because the host name isn't available.
+ * For earlier versions of the SDK, the [Web SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Web/) behaves like the newer SDK versions, but the [Core SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights/) reports only one node, regardless of the number of application hosts.
+ * If your application uses the SDK to set **roleInstance** to a custom value, by default, that same value is used to determine node count.
+ * If you're using a new SDK version with an app that runs from client machines or mobile devices, the node count might return a number that's large (because of the large number of client machines or mobile devices).
+++++
+## Next steps
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
While the above sample is for a console app, the same code can be used in any .N
|**Latency**|Data displayed within one second|Aggregated over minutes| |**No retention**|Data persists while it's on the chart, and is then discarded|[Data retained for 90 days](./data-retention-privacy.md#how-long-is-the-data-kept)| |**On demand**|Data is only streamed while the Live Metrics pane is open |Data is sent whenever the SDK is installed and enabled|
-|**Free**|There is no charge for Live Stream data|Subject to [pricing](./pricing.md)
+|**Free**|There is no charge for Live Stream data|Subject to [pricing](../logs/cost-logs.md#application-insights-billing)
|**Sampling**|All selected metrics and counters are transmitted. Failures and stack traces are sampled. |Events may be [sampled](./api-filtering-sampling.md)| |**Control channel**|Filter control signals are sent to the SDK. We recommend you secure this channel.|Communication is one way, to the portal|
azure-monitor Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-This article shows you how to automate the creation and update of [Application Insights](./app-insights-overview.md) resources automatically by using Azure Resource Management. You might, for example, do so as part of a build process. Along with the basic Application Insights resource, you can create [availability web tests](./monitor-web-app-availability.md), set up [alerts](../alerts/alerts-log.md), set the [pricing scheme](pricing.md), and create other Azure resources.
+This article shows you how to automate the creation and update of [Application Insights](./app-insights-overview.md) resources automatically by using Azure Resource Management. You might, for example, do so as part of a build process. Along with the basic Application Insights resource, you can create [availability web tests](./monitor-web-app-availability.md), set up [alerts](../alerts/alerts-log.md), set the [pricing scheme](../logs/cost-logs.md#application-insights-billing), and create other Azure resources.
The key to creating these resources is JSON templates for [Azure Resource Manager](../../azure-resource-manager/management/manage-resources-powershell.md). The basic procedure is: download the JSON definitions of existing resources; parameterize certain values such as names; and then run the template whenever you want to create a new resource. You can package several resources together, to create them all in one go - for example, an app monitor with availability tests, alerts, and storage for continuous export. There are some subtleties to some of the parameterizations, which we'll explain here.
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
There are several [ways of sending custom metrics from the Application Insights
## Custom metrics dimensions and pre-aggregation
-All metrics that you send using [trackMetric](./api-custom-events-metrics.md#trackmetric) or [GetMetric and TrackValue](./api-custom-events-metrics.md#getmetric) API calls are automatically stored in both logs and metrics stores. However, while the log-based version of your custom metric always retains all dimensions, the pre-aggregated version of the metric is stored by default with no dimensions. You can turn on collection of dimensions of custom metrics on the [usage and estimated cost](./pricing.md) tab by checking "Enable alerting on custom metric dimensions":
+All metrics that you send using [trackMetric](./api-custom-events-metrics.md#trackmetric) or [GetMetric and TrackValue](./api-custom-events-metrics.md#getmetric) API calls are automatically stored in both logs and metrics stores. However, while the log-based version of your custom metric always retains all dimensions, the pre-aggregated version of the metric is stored by default with no dimensions. You can turn on collection of dimensions of custom metrics on the [usage and estimated cost](../usage-estimated-costs.md#usage-and-estimated-costs) tab by checking "Enable alerting on custom metric dimensions":
![Usage and estimated cost](./media/pre-aggregated-metrics-log-metrics/001-cost.png)
Use [Azure Monitor Metrics Explorer](../essentials/metrics-getting-started.md) t
## Pricing models for Application Insights metrics
-Ingesting metrics into Application Insights, whether log-based or pre-aggregated, will generate costs based on the size of the ingested data, as described [here](./pricing.md#pricing-model). Your custom metrics, including all its dimensions, are always stored in the Application Insights log-store; additionally, a pre-aggregated version of your custom metrics (with no dimensions) is forwarded to the metrics store by default.
+Ingesting metrics into Application Insights, whether log-based or pre-aggregated, will generate costs based on the size of the ingested data, as described in [Azure Monitor Logs pricing details](../logs/cost-logs.md#application-insights-billing). Your custom metrics, including all its dimensions, are always stored in the Application Insights log-store; additionally, a pre-aggregated version of your custom metrics (with no dimensions) is forwarded to the metrics store by default.
Selecting the [Enable alerting on custom metric dimensions](#custom-metrics-dimensions-and-pre-aggregation) option to store all dimensions of the pre-aggregated metrics in the metric store can generate **additional** costs based on [Custom Metrics pricing](https://azure.microsoft.com/pricing/details/monitor/).
azure-monitor Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pricing.md
- Title: Manage usage and costs for Azure Application Insights | Microsoft Docs
-description: Manage telemetry volumes and monitor costs in Application Insights.
-- Previously updated : 02/17/2021---
-# Manage usage and costs for Application Insights
-
-This article describes how to proactively monitor and control Application Insights costs.
-
-[Monitoring usage and estimated costs](..//usage-estimated-costs.md) describes usage and estimated costs across Azure Monitor features using [Azure Cost Management + Billing](../logs/manage-cost-storage.md#viewing-log-analytics-usage-on-your-azure-bill).
-
-> [!NOTE]
-> All prices and costs in this article are for example purposes only.
-
-<! App Insights monitoring features (availability, performance, usage, etc. ) Supported languages, integration with specific tools (Azure DevOps, Jira, and PagerDuty, etc.) should be documented elsewhere. (e.g. platforms.md) -->
-
-If you have questions about how pricing works for Application Insights, you can post a question in our [Microsoft Q&A question page](/answers/topics/azure-monitor.html).
-
-## Pricing model
-
-The pricing for [Azure Application Insights][start] is a **Pay-As-You-Go** model based on data volume ingested and optionally for longer data retention. Each Application Insights resource is charged as a separate service and contributes to the bill for your Azure subscription. Data volume is measured as the size of the uncompressed JSON data package that's received by Application Insights from your application. Data volume is measured in GB (10^9 bytes). There's no data volume charge for using the [Live Metrics Stream](./live-stream.md). On your Azure bill or in [Azure Cost Management + Billing](../logs/manage-cost-storage.md#viewing-log-analytics-usage-on-your-azure-bill), your data ingestion and data retention for a classic Application Insights resource will be reported with a meter category of **Log Analytics**.
-
-[Multi-step web tests](./availability-multistep.md) incur extra charges. Multi-step web tests are web tests that perform a sequence of actions. There's no separate charge for *ping tests* of a single page. Telemetry from ping tests and multi-step tests is charged the same as other telemetry from your app.
-
-The Application Insights option to [Enable alerting on custom metric dimensions](./pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) can also increase costs because this can result in the creation of more pre-aggregation metrics. [Learn more](./pre-aggregated-metrics-log-metrics.md) about log-based and pre-aggregated metrics in Application Insights and about [pricing](https://azure.microsoft.com/pricing/details/monitor/) for Azure Monitor custom metrics.
-
-### Workspace-based Application Insights
-
-For Application Insights resources which send their data to a Log Analytics workspace, called [workspace-based Application Insights resources](create-workspace-resource.md), the billing for data ingestion and retention is done by the workspace where the Application Insights data is located. This enables you to leverage all options of the Log Analytics [pricing model](../logs/manage-cost-storage.md#pricing-model), including **Commitment Tiers** in addition to Pay-As-You-Go. Commitment Tiers offer pricing up to 30% lower than Pay-As-You-Go. Log Analytics also has more options for data retention, including [retention by data type](../logs/manage-cost-storage.md#retention-by-data-type). Application Insights data types in the workspace receive 90 days of retention without charges. Usage of web tests and enabling alerting on custom metric dimensions is still reported through Application Insights. Learn how to track data ingestion and retention costs in Log Analytics using the [Usage and estimated costs](../logs/manage-cost-storage.md#understand-your-usage-and-estimate-costs), [Azure Cost Management + Billing](../logs/manage-cost-storage.md#viewing-log-analytics-usage-on-your-azure-bill) and [Log Analytics queries](#data-volume-for-workspace-based-application-insights-resources).
-
-## Estimating the costs to manage your application
-
-If you're not yet using Application Insights, you can use the [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to estimate the cost of using Application Insights. Start by entering "Azure Monitor" in the Search box, and clicking on the resulting Azure Monitor tile. Scroll down the page to Azure Monitor, and expand the Application Insights section. Your estimated costs depend on the amount of log data ingested. There are two approaches to estimate data volumes:
-
-1. estimate your likely data ingestion based on what other similar applications generate, or
-2. use of default monitoring and adaptive sampling, which is available in the ASP.NET SDK.
-
-### Learn from what similar applications collect
-
-In the Azure Monitoring Pricing calculator for Application Insights, click to enable the **Estimate data volume based on application activity**. Here you can provide inputs about your application (requests per month and page views per month, in case you'll collect client-side telemetry), and then the calculator will tell you the median and 90th percentile amount of data collected by similar applications. These applications span the range of Application Insights configuration (e.g some have default [sampling](./sampling.md), some have no sampling etc.), so you still have the control to reduce the volume of data you ingest far below the median level using sampling.
-
-### Data collection when using sampling
-
-With the ASP.NET SDK's [adaptive sampling](sampling.md#adaptive-sampling), the data volume is adjusted automatically to keep within a specified maximum rate of traffic for default Application Insights monitoring. If the application produces a low amount of telemetry, such as when debugging or due to low usage, items won't be dropped by the sampling processor as long as volume is below the configured events per second level. For a high volume application, with the default threshold of five events per second, adaptive sampling will limit the number of daily events to 432,000. Considering a typical average event size of 1 KB, this corresponds to 13.4 GB of telemetry per 31-day month per node hosting your application since the sampling is done local to each node.
-
-For SDKs that don't support adaptive sampling, you can employ [ingestion sampling](./sampling.md#ingestion-sampling), which samples when the data is received by Application Insights based on a percentage of data to retain, or [fixed-rate sampling for ASP.NET, ASP.NET Core, and Java websites](sampling.md#fixed-rate-sampling) to reduce the traffic sent from your web server and web browsers
-
-## Viewing Application Insights usage on your Azure bill
-
-The easiest way to see the billed usage for a single Application Insights resource, which isn't a workspace-based resource is to go to the resource's Overview page and click **View Cost** in the upper right corner. You might need elevated access to Cost Management data ([learn more](../../cost-management-billing/costs/assign-access-acm-data.md)).
-
-To learn more, Azure provides a great deal of useful functionality in the [Azure Cost Management + Billing](../../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=/azure/billing/TOC.json) hub. For instance, the "Cost analysis" functionality enables you to view your spends for Azure resources. Adding a filter by resource type (to microsoft.insights/components for Application Insights) will allow you to track your spending. Then for "Group by" select "Meter category" or "Meter". Application Insights billed usage for data ingestion and data retention will show up as **Log Analytics** for the Meter category since Log Analytics backend for all Azure Monitor logs.
-
-> [!NOTE]
-> Application Insights billing for data ingestion and data retention is reported as coming from the **Log Analytics** service (Meter category in Azure Cost Management + Billing).
-
-Even more understanding of your usage can be gained by [downloading your usage from the Azure portal](../../cost-management-billing/understand/download-azure-daily-usage.md).
-In the downloaded spreadsheet, you can see usage per Azure resource per day. In this Excel spreadsheet, usage from your Application Insights resources can be found by first filtering on the "Meter Category" column to show "Application Insights" and "Log Analytics", and then adding a filter on the "Instance ID" column, which is "contains microsoft.insights/components". Most Application Insights usage is reported on meters with the Meter Category of Log Analytics, since there's a single logs backend for all Azure Monitor components. Only Application Insights resources on legacy pricing tiers and multi-step web tests are reported with a Meter Category of Application Insights. The usage is shown in the "Consumed Quantity" column and the unit for each entry is shown in the "Unit of Measure" column. More details are available to help you [understand your Microsoft Azure bill](../../cost-management-billing/understand/review-individual-bill.md).
-
-## Understand your usage and optimize your costs
-<a name="understand-your-usage-and-estimate-costs"></a>
-
-Application Insights makes it easy to understand what your costs are likely to be based on recent usage patterns. To get started, in the Azure portal, for the Application Insights resource, go to the **Usage and estimated costs** page:
-
-![Choose pricing](./media/pricing/pricing-001.png)
-
-A. Review your data volume for the month. This includes all the data that's received and retained (after any [sampling](./sampling.md)) from your server and client apps, and from availability tests.
-B. A separate charge is made for [Multi-step web tests](./availability-multistep.md). (This doesn't include simple availability tests, which are included in the data volume charge.)
-C. View data volume trends for the past month.
-D. Enable data ingestion [sampling](./sampling.md).
-E. Set the daily data volume cap.
-
-(All prices displayed in screenshots in this article are for example purposes only. For current prices in your currency and region, see [Application Insights pricing][pricing].)
-
-To investigate your Application Insights usage more deeply, open the **Metrics** page, add the metric named "Data point volume", and then select the *Apply splitting* option to split the data by "Telemetry item type".
-
-Application Insights charges are added to your Azure bill. You can see details of your Azure bill in the **Cost Management + Billing** section of the Azure portal, or in the [Azure billing portal](https://account.windowsazure.com/Subscriptions). [See below](#viewing-application-insights-usage-on-your-azure-bill) for details on using this for Application Insights.
-
-![In the left menu, select Billing](./media/pricing/02-billing.png)
-
-### Using data volume metrics
-<a id="understanding-ingested-data-volume"></a>
-
-To learn more about your data volumes, select **Metrics** for your Application Insights resource and add a new chart. For the chart metric, under **Log-based metrics**, select **Data point volume**. Click **Apply splitting**, and select group by **Telemetry item type**.
-
-![Use Metrics to look at data volume](./media/pricing/10-billing.png)
-
-### Queries to understand data volume details
-
-There are two approaches to investigating data volumes for Application Insights. The first uses aggregated information in the `systemEvents` table, and the second uses the `_BilledSize` property, which is available on each ingested event. `systemEvents` won't have data size information for [workspace-based Application Insights resources](#data-volume-for-workspace-based-application-insights-resources).
-
-#### Using aggregated data volume information
-
-For instance, you can use the `systemEvents` table to see the data volume ingested in the last 24 hours with the query:
-
-```kusto
-systemEvents
-| where timestamp >= ago(24h)
-| where type == "Billing"
-| extend BillingTelemetryType = tostring(dimensions["BillingTelemetryType"])
-| extend BillingTelemetrySizeInBytes = todouble(measurements["BillingTelemetrySize"])
-| summarize sum(BillingTelemetrySizeInBytes)
-```
-
-Or to see a chart of data volume (in bytes) by data type for the last 30 days, you can use:
-
-```kusto
-systemEvents
-| where timestamp >= startofday(ago(30d))
-| where type == "Billing"
-| extend BillingTelemetryType = tostring(dimensions["BillingTelemetryType"])
-| extend BillingTelemetrySizeInBytes = todouble(measurements["BillingTelemetrySize"])
-| summarize sum(BillingTelemetrySizeInBytes) by BillingTelemetryType, bin(timestamp, 1d) | render barchart
-```
-
-This query can be used in an [Azure Log Alert](../alerts/alerts-unified-log.md) to set up alerting on data volumes.
-
-To understand how your telemetry volumes are changing over time, you can get the count of events by type with the query:
-
-```kusto
-systemEvents
-| where timestamp >= startofday(ago(30d))
-| where type == "Billing"
-| extend BillingTelemetryType = tostring(dimensions["BillingTelemetryType"])
-| summarize count() by BillingTelemetryType, bin(timestamp, 1d)
-| render barchart
-```
-
-#### Using data size per event information
-
-To learn more details about the source of your data volumes, you can use the `_BilledSize` property that is present on each ingested event.
-
-For example, to look at which operations generate the most data volume in the last 30 days, we can sum `_BilledSize` for all dependency events:
-
-```kusto
-dependencies
-| where timestamp >= startofday(ago(30d))
-| summarize sum(_BilledSize) by operation_Name
-| render barchart
-```
-
-#### Data volume for workspace-based Application Insights resources
-
-To look at the data volume trends for all of the [workspace-based Application Insights resources](create-workspace-resource.md) in a workspace for the last week, go to the Log Analytics workspace and run the query:
-
-```kusto
-union (AppAvailabilityResults),
- (AppBrowserTimings),
- (AppDependencies),
- (AppExceptions),
- (AppEvents),
- (AppMetrics),
- (AppPageViews),
- (AppPerformanceCounters),
- (AppRequests),
- (AppSystemEvents),
- (AppTraces)
-| where TimeGenerated >= startofday(ago(7d)) and TimeGenerated < startofday(now())
-| summarize sum(_BilledSize) by _ResourceId, bin(TimeGenerated, 1d)
-| render areachart
-```
-
-To query the data volume trends by type for a specific workspace-based Application Insights resource, in the Log Analytics workspace use:
-
-```kusto
-union (AppAvailabilityResults),
- (AppBrowserTimings),
- (AppDependencies),
- (AppExceptions),
- (AppEvents),
- (AppMetrics),
- (AppPageViews),
- (AppPerformanceCounters),
- (AppRequests),
- (AppSystemEvents),
- (AppTraces)
-| where TimeGenerated >= startofday(ago(7d)) and TimeGenerated < startofday(now())
-| where _ResourceId contains "<myAppInsightsResourceName>"
-| summarize sum(_BilledSize) by Type, bin(TimeGenerated, 1d)
-| render areachart
-```
-
-## Managing your data volume
-
-The volume of data you send can be managed using the following techniques:
-
-* **Sampling**: You can use sampling to reduce the amount of telemetry that's sent from your server and client apps, with minimal distortion of metrics. Sampling is the primary tool you can use to tune the amount of data you send. Learn more about [sampling features](./sampling.md).
-
-* **Limit Ajax calls**: You can [limit the number of Ajax calls that can be reported](./javascript.md#configuration) in every page view, or switch off Ajax reporting. Disabling Ajax calls will disable [JavaScript correlation](./javascript.md#enable-correlation).
-
-* **Disable unneeded modules**: [Edit ApplicationInsights.config](./configuration-with-applicationinsights-config.md) to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data are inessential.
-
-* **Pre-aggregate metrics**: If you put calls to TrackMetric in your app, you can reduce traffic by using the overload that accepts your calculation of the average and standard deviation of a batch of measurements. Or, you can use a [pre-aggregating package](https://www.myget.org/gallery/applicationinsights-sdk-labs).
-
-* **Daily cap**: When you create an Application Insights resource in the Azure portal, the daily cap is set to 100 GB/day. When you create an Application Insights resource in Visual Studio, the default is small (only 32.3 MB/day). The daily cap default is set to facilitate testing. It's intended that the user will raise the daily cap before deploying the app into production.
-
- The maximum cap in Application Insights is 1,000 GB/day unless you request a higher maximum for a high-traffic application.
-
- > [!TIP]
- > If you have a workspace-based Application Insights resource, we recommend using the [workspace's daily cap](../logs/manage-cost-storage.md#manage-your-maximum-daily-data-volume) to limit ingestion and costs instead of the cap in Application Insights.
-
- Warning emails about the daily cap are sent to accounts that are members of these roles for your Application Insights resource: "ServiceAdmin", "AccountAdmin", "CoAdmin", and "Owner".
-
- Use care when you set the daily cap. Your intent should be to *never hit the daily cap*. If you hit the daily cap, you lose data for the remainder of the day, and you can't monitor your application. To change the daily cap, use the **Daily volume cap** option. You can access this option in the **Usage and estimated costs** pane (this is described in more detail later in the article).
-
- We've removed the restriction on some subscription types that have credit that couldn't be used for Application Insights. Previously, if the subscription had a spending limit, the daily cap dialog had instructions for removing the spending limit before the daily cap could be raised beyond 32.3 MB/day.
-
-* **Throttling**: Throttling limits the data rate to 32,000 events per second, averaged over 1 minute per instrumentation key. The volume of data that your app sends is assessed every minute. If it exceeds the per-second rate averaged over the minute, the server refuses some requests. The SDK buffers the data and then tries to resend it. It spreads out a surge over several minutes. If your app consistently sends data at more than the throttling rate, some data will be dropped. (The ASP.NET, Java, and JavaScript SDKs try to resend data this way; other SDKs might drop throttled data.) If throttling occurs, a notification warning alerts you that this has occurred.
--
-## Manage your maximum daily data volume
-
-You can use the daily volume cap to limit the data collected. However, if the cap is met, a loss of all telemetry sent from your application for the remainder of the day occurs. It *isn't advisable* to have your application hit the daily cap. You can't track the health and performance of your application after it reaches the daily cap.
-
-> [!WARNING]
-> If you have a workspace-based Application Insights resource, we recommend using the [workspace's daily cap](../logs/manage-cost-storage.md#manage-your-maximum-daily-data-volume) to limit ingestion and costs. The daily cap in Application Insights may not limit ingestion in all cases to the selected level. (If your Application Insights resource is ingesting a lot of data, the Application Insights daily cap might need to be raised.)
-
-Instead of using the daily volume cap, use [sampling](./sampling.md) to tune the data volume to the level you want. Then, use the daily cap only as a "last resort" in case your application unexpectedly begins to send much higher volumes of telemetry.
-
-### Identify what daily data limit to define
-
-Review the Application Insights **Usage and estimated costs** page to understand the data ingestion trend and decide what daily volume cap to define. Set the cap with care, because you won't be able to monitor your resources after the limit is reached.
-
-### Set the Daily Cap
-
-To change the daily cap, in the **Configure** section of your Application Insights resource, in the **Usage and estimated costs** page, select **Daily Cap**.
-
-![Adjust the daily telemetry volume cap](./media/pricing/pricing-003.png)
-
-To [change the daily cap via Azure Resource Manager](./powershell.md), the property to change is the `dailyQuota`. Via Azure Resource Manager you can also set the `dailyQuotaResetTime` and the daily cap's `warningThreshold`.
-
-### Create alerts for the Daily Cap
-
-The Application Insights Daily Cap creates an event in the Azure activity log when the ingested data volume reaches the warning level or the daily cap level. You can [create an alert based on these activity log events](../alerts/alerts-activity-log.md#azure-portal). The signal names for these events are:
-
-* Application Insights component daily cap warning threshold reached
-
-* Application Insights component daily cap reached
-
-## Sampling
-[Sampling](./sampling.md) is a method of reducing the rate at which telemetry is sent from your app, while retaining the ability to find related events during diagnostic searches. You also retain correct event counts.
-
-Sampling is an effective way to reduce charges and stay within your monthly quota. The sampling algorithm retains related items of telemetry so, for example, when you use Search, you can find the request related to a particular exception. The algorithm also retains correct counts so you see the correct values in Metric Explorer for request rates, exception rates, and other counts.
-
-There are several forms of sampling.
-
-* [Adaptive sampling](./sampling.md) is the default for the ASP.NET SDK. Adaptive sampling automatically adjusts to the volume of telemetry that your app sends. It operates automatically in the SDK in your web app so that telemetry traffic on the network is reduced.
-* *Ingestion sampling* is an alternative that operates at the point where telemetry from your app enters the Application Insights service. Ingestion sampling doesn't affect the volume of telemetry sent from your app, but it reduces the volume that's retained by the service. You can use ingestion sampling to reduce the quota that's used up by telemetry from browsers and other SDKs.
-
-To set ingestion sampling, go to the **Pricing** pane:
-
-![In the Quota and pricing pane, select the Samples tile, and then select a sampling fraction](./media/pricing/pricing-004.png)
-
-> [!WARNING]
-> The **Data sampling** pane controls only the value of ingestion sampling. It doesn't reflect the sampling rate that's applied by the Application Insights SDK in your app. If the incoming telemetry has already been sampled in the SDK, ingestion sampling isn't applied.
->
-
-To discover the actual sampling rate, no matter where it's been applied, use an [Analytics query](../logs/log-query-overview.md). The query looks like this:
-
-```kusto
-requests | where timestamp > ago(1d)
-| summarize 100/avg(itemCount) by bin(timestamp, 1h)
-| render areachart
-```
-
-In each retained record, `itemCount` indicates the number of original records that it represents. It's equal to 1 + the number of previous discarded records.
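-
-For example, to compare how many records were retained against how many events originally occurred, you can sum `itemCount`. The following query is a minimal sketch of this approach for the `requests` table; the same pattern applies to other tables such as `dependencies` or `exceptions`:
-
-```kusto
-requests
-| where timestamp > ago(1d)
-| summarize retainedRecords = count(),                  // records stored after sampling
-            estimatedOriginalRecords = sum(itemCount)   // original records they represent
-```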
-
-## Change the data retention period
-
-The default retention for Application Insights resources is 90 days. Different retention periods can be selected for each Application Insights resource. The full set of available retention periods is 30, 60, 90, 120, 180, 270, 365, 550 or 730 days. [Learn more](https://azure.microsoft.com/pricing/details/monitor/) about pricing for longer data retention.
-
-To change the retention, from your Application Insights resource, go to the **Usage and Estimated Costs** page and select the **Data Retention** option:
-
-![Screenshot that shows where to change the data retention period.](./media/pricing/pricing-005.png)
-
-When you lower the retention period, a grace period of several days passes before the oldest data is removed.
-
-The retention can also be [set programmatically using PowerShell](powershell.md#set-the-data-retention) with the `retentionInDays` parameter. If you set the data retention to 30 days, you can trigger an immediate purge of older data using the `immediatePurgeDataOn30Days` parameter, which may be useful for compliance-related scenarios. This purge functionality is only exposed via Azure Resource Manager and should be used with extreme care. The daily reset time for the data volume cap can be configured using Azure Resource Manager to set the `dailyQuotaResetTime` parameter.
-
-## Data transfer charges using Application Insights
-
-Sending data to Application Insights might incur data bandwidth charges. As described in the [Azure Bandwidth pricing page](https://azure.microsoft.com/pricing/details/bandwidth/), data transfer between Azure services located in two different regions is charged as outbound data transfer at the normal rate. Inbound data transfer is free. However, this charge is very small (a few percent) compared to the costs for Application Insights log data ingestion. Consequently, controlling costs for Log Analytics needs to focus on your ingested data volume; see [Managing your data volume](#managing-your-data-volume) for guidance.
-
-## Limits summary
--
-## Disable daily cap e-mails
-
-To disable the daily volume cap e-mails, under the **Configure** section of your Application Insights resource, in the **Usage and estimated costs** pane, select **Daily Cap**. There are settings to send e-mail when the cap is reached, as well as when an adjustable warning level has been reached. If you wish to disable all daily cap volume-related emails, uncheck both boxes.
-
-## Legacy Enterprise (Per Node) pricing tier
-
-For early adopters of Azure Application Insights, there are still two possible pricing tiers: Basic and Enterprise. The Basic pricing tier is the same as described above and is the default tier. It includes all Enterprise tier features, at no extra cost. The Basic tier bills primarily on the volume of data that's ingested.
-
-These legacy pricing tiers have been renamed. The Enterprise pricing tier is now called **Per Node** and the Basic pricing tier is now called **Per GB**. These new names are used below and in the Azure portal.
-
-The Per Node (formerly Enterprise) tier has a per-node charge, and each node receives a daily data allowance. In the Per Node pricing tier, you're charged for data ingested above the included allowance. If you're using Operations Management Suite, you should choose the Per Node tier. In April 2018, we [introduced](https://azure.microsoft.com/blog/introducing-a-new-way-to-purchase-azure-monitoring-services/) a new pricing model for Azure monitoring. This model adopts a simple "pay-as-you-go" model across the complete portfolio of monitoring services. Learn more about the [new pricing model](../usage-estimated-costs.md).
-
-For current prices in your currency and region, see [Application Insights pricing](https://azure.microsoft.com/pricing/details/application-insights/).
-
-### Understanding billed usage on the legacy Enterprise (Per Node) tier
-
-As described below in more detail, the legacy Enterprise (Per Node) tier combines usage from across all Application Insights resources in a subscription to calculate the number of nodes and the data overage. Because of this combination process, **usage for all Application Insights resources in a subscription is reported against just one of the resources**. This makes reconciling your [billed usage](#viewing-application-insights-usage-on-your-azure-bill) with the usage you observe for each Application Insights resource complicated.
-
-> [!WARNING]
-> Because of the complexity of tracking and understanding usage of Application Insights resources in the legacy Enterprise (Per Node) tier, we strongly recommend using the current Pay-As-You-Go pricing tier.
-
-### Per Node tier and Operations Management Suite subscription entitlements
-
-Customers who purchase Operations Management Suite E1 and E2 can get Application Insights Per Node as a supplemental component at no extra cost as [previously announced](/archive/blogs/msoms/azure-application-insights-enterprise-as-part-of-operations-management-suite-subscription). Specifically, each unit of Operations Management Suite E1 and E2 includes an entitlement to one node of the Application Insights Per Node tier. Each Application Insights node includes up to 200 MB of data ingested per day (separate from Log Analytics data ingestion), with 90-day data retention at no extra cost. The tier is described in more detail later in the article.
-
-Because this tier is applicable only to customers with an Operations Management Suite subscription, customers who don't have an Operations Management Suite subscription don't see an option to select this tier.
-
-> [!NOTE]
-> To ensure that you get this entitlement, your Application Insights resources must be in the Per Node pricing tier. This entitlement applies only to nodes; Application Insights resources in the Per GB tier don't realize any benefit.
-> This entitlement isn't visible in the estimated costs shown in the **Usage and estimated cost** pane. Also, if you move a subscription to the new Azure monitoring pricing model introduced in April 2018, the Per GB tier is the only tier available. Moving a subscription to the new Azure monitoring pricing model isn't advisable if you have an Operations Management Suite subscription.
-
-### How the Per Node tier works
-
-* You pay for each node that sends telemetry for any apps in the Per Node tier.
- * A *node* is a physical or virtual server machine or a platform-as-a-service role instance that hosts your app.
- * Development machines, client browsers, and mobile devices don't count as nodes.
- * If your app has several components that send telemetry, such as a web service and a back-end worker, the components are counted separately.
- * [Live Metrics Stream](./live-stream.md) data isn't counted for pricing purposes.
- * In a subscription, your charges are per node, not per app. If you have five nodes that send telemetry for 12 apps, the charge is for five nodes.
-* Although charges are quoted per month, you're charged only for any hour in which a node sends telemetry from an app. The hourly charge is the quoted monthly charge divided by 744 (the number of hours in a 31-day month).
-* A data volume allocation of 200 MB per day is given for each node that's detected (with hourly granularity). Unused data allocation isn't carried over from one day to the next.
- * If you choose the Per Node pricing tier, each subscription gets a daily allowance of data based on the number of nodes that send telemetry to the Application Insights resources in that subscription. So, if you have five nodes that send data all day, you'll have a pooled allowance of 1 GB applied to all Application Insights resources in that subscription. It doesn't matter if certain nodes send more data than other nodes, because the included data is shared across all nodes. If, on a given day, the Application Insights resources receive more data than is included in the daily data allocation for this subscription, the per-GB overage data charges apply.
- * The daily data allowance is calculated as the number of hours in the day (using UTC) that each node sends telemetry divided by 24 multiplied by 200 MB. So, if you have four nodes that send telemetry during 15 of the 24 hours in the day, the included data for that day would be ((4 &#215; 15) / 24) &#215; 200 MB = 500 MB. At the price of 2.30 USD per GB for data overage, the charge would be 1.15 USD if the nodes send 1 GB of data that day.
- * The Per Node tier daily allowance isn't shared with applications for which you have chosen the Per GB tier. Unused allowance isn't carried over from day-to-day.
-
-### Examples of how to determine distinct node count
-
-| Scenario | Total daily node count |
-|:|:-:|
-| 1 application using 3 Azure App Service instances and 1 virtual server | 4 |
-| 3 applications running on 2 VMs; the Application Insights resources for these applications are in the same subscription and in the Per Node tier | 2 |
-| 4 applications whose Applications Insights resources are in the same subscription; each application running 2 instances during 16 off-peak hours, and 4 instances during 8 peak hours | 13.33 |
-| Cloud services with 1 Worker Role and 1 Web Role, each running 2 instances | 4 |
-| A 5-node Azure Service Fabric cluster running 50 microservices; each microservice running 3 instances | 5|
-
-* The precise node counting depends on which Application Insights SDK your application is using.
- * In SDK versions 2.2 and later, both the Application Insights [Core SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights/) and the [Web SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Web/) report each application host as a node. Examples are the computer name for physical server and VM hosts or the instance name for cloud services. The only exception is an application that uses only the [.NET Core](https://dotnet.github.io/) and the Application Insights Core SDK. In that case, only one node is reported for all hosts because the host name isn't available.
- * For earlier versions of the SDK, the [Web SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Web/) behaves like the newer SDK versions, but the [Core SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights/) reports only one node, regardless of the number of application hosts.
- * If your application uses the SDK to set **roleInstance** to a custom value, by default, that same value is used to determine node count.
- * If you're using a new SDK version with an app that runs from client machines or mobile devices, the node count might return a number that's large (because of the large number of client machines or mobile devices).
-
-## Automation
-
-You can write a script to set the pricing tier by using Azure Resource Manager. [Learn how](powershell.md#price).
-
-## Next steps
-
-[Sampling](./sampling.md) in Application Insights is the recommended way to reduce telemetry traffic, data costs, and storage costs.
-
-[api]: app-insights-api-custom-events-metrics.md
-[apiproperties]: app-insights-api-custom-events-metrics.md#properties
-[start]: ./app-insights-overview.md
-[pricing]: https://azure.microsoft.com/pricing/details/application-insights/
-
-## Troubleshooting
-
-### Unexpected usage or estimated cost
-
-Lower your bill with updated versions of the ASP.NET Core SDK and Worker Service SDK, which [don't collect counters by default](eventcounters.md#default-counters-collected).
-
-### Microsoft Q&A question page
-
-If you have questions about how pricing works for Application Insights, you can post a question in our [Microsoft Q&A question page](/answers/topics/azure-monitor.html).
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
The above code will disable adaptive sampling. Follow the steps below to add sam
Use extension methods of `TelemetryProcessorChainBuilder` as shown below to customize sampling behavior. > [!IMPORTANT]
-> If you use this method to configure sampling, please make sure to set the `aiOptions.EnableAdaptiveSampling` property to `false` when calling `AddApplicationInsightsTelemetry()`. After making this change, you then need to follow the instructions in the code block below **exactly** in order to re-enable adaptive sampling with your customizations in place. Failure to do so can result in excess data ingestion. Always test post changing sampling settings, and set an appropriate [daily data cap](pricing.md#set-the-daily-cap) to help control your costs.
+> If you use this method to configure sampling, please make sure to set the `aiOptions.EnableAdaptiveSampling` property to `false` when calling `AddApplicationInsightsTelemetry()`. After making this change, you then need to follow the instructions in the code block below **exactly** in order to re-enable adaptive sampling with your customizations in place. Failure to do so can result in excess data ingestion. Always test post changing sampling settings, and set an appropriate [daily data cap](../logs/daily-cap.md) to help control your costs.
```csharp using Microsoft.ApplicationInsights.Extensibility
In general, for most small and medium size applications you don't need sampling.
The main advantages of sampling are: * Application Insights service drops ("throttles") data points when your app sends a very high rate of telemetry in a short time interval. Sampling reduces the likelihood that your application will see throttling occur.
-* To keep within the [quota](pricing.md) of data points for your pricing tier.
+* To keep within the [quota](../logs/daily-cap.md) of data points for your pricing tier.
* To reduce network traffic from the collection of telemetry. ### Which type of sampling should I use?
azure-monitor Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/windows-desktop.md
using Microsoft.ApplicationInsights;
By default this SDK will collect and store the computer name of the system emitting telemetry.
-Computer name is used by Application Insights [Legacy Enterprise (Per Node) pricing tier](./pricing.md#legacy-enterprise-per-node-pricing-tier) for internal billing purposes. By default if you use a telemetry initializer to override `telemetry.Context.Cloud.RoleInstance`, a separate property `ai.internal.nodeName` will be sent which will still contain the computer name value. This value will not be stored with your Application Insights telemetry, but is used internally at ingestion to allow for backwards compatibility with the legacy node-based billing model.
+Computer name is used by Application Insights [Legacy Enterprise (Per Node) pricing tier](../logs/cost-logs.md#legacy-pricing-tiers) for internal billing purposes. By default if you use a telemetry initializer to override `telemetry.Context.Cloud.RoleInstance`, a separate property `ai.internal.nodeName` will be sent which will still contain the computer name value. This value will not be stored with your Application Insights telemetry, but is used internally at ingestion to allow for backwards compatibility with the legacy node-based billing model.
-If you are on the [Legacy Enterprise (Per Node) pricing tier](./pricing.md#legacy-enterprise-per-node-pricing-tier) and simply need to override storage of the computer name, use a telemetry Initializer:
+If you are on the Legacy Enterprise (Per Node) pricing tier and simply need to override storage of the computer name, use a telemetry Initializer:
**Write custom TelemetryInitializer as below.**
Instantiate the initializer in the `Program.cs` `Main()` method below setting th
## Override transmission of computer name
-If you aren't on the [Legacy Enterprise (Per Node) pricing tier](./pricing.md#legacy-enterprise-per-node-pricing-tier) and wish to completely prevent any telemetry containing computer name from being sent, you need to use a telemetry processor.
+If you aren't on the Legacy Enterprise (Per Node) pricing tier and wish to completely prevent any telemetry containing computer name from being sent, you need to use a telemetry processor.
### Telemetry processor
namespace WindowsFormsApp2
``` > [!NOTE]
-> While you can technically use a telemetry processor as described above even if you are on the [Legacy Enterprise (Per Node) pricing tier](./pricing.md#legacy-enterprise-per-node-pricing-tier), this will result in the potential for over-billing due to the inability to properly distinguish nodes for per node pricing.
+> While you can technically use a telemetry processor as described above even if you are on the Legacy Enterprise (Per Node) pricing tier, this will result in the potential for over-billing due to the inability to properly distinguish nodes for per node pricing.
## Next steps * [Create a dashboard](./overview-dashboard.md)
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
+
+ Title: Azure Monitor best practices - Cost management
+description: Guidance and recommendations for reducing your cost for Azure Monitor.
+++ Last updated : 03/31/2022+++
+# Azure Monitor best practices - Cost management
+This article provides guidance on reducing your cloud monitoring costs by implementing and managing Azure Monitor in the most cost effective manner. This includes leveraging cost saving features and ensuring that you're not paying for data collection that provides little value. It also provides guidance for regularly monitoring your usage so that you can proactively detect and identify sources responsible for excessive usage.
++
+## Understand Azure Monitor charges
+You should start by understanding the different ways that Azure Monitor charges and how to view your monthly bill. See [Azure Monitor cost and usage](usage-estimated-costs.md) for a complete description and the different tools available to analyze your charges.
+
+## Configure workspaces
+You can start using Azure Monitor with a single Log Analytics workspace using default options. As your monitoring environment grows though, you'll need to decide whether to have multiple services share a single workspace or to create multiple workspaces, and you'll want to evaluate configuration options that allow you to reduce your monitoring costs.
+
+### Configure pricing tier or dedicated cluster
+By default, workspaces will use Pay-As-You-Go pricing with no minimum data volume. If you collect a sufficient amount of data though, you can significantly decrease your cost by using a [commitment tier](logs/cost-logs.md#commitment-tiers). You commit to a daily minimum of data collected in exchange for a lower rate.
+
+[Dedicated clusters](logs/logs-dedicated-clusters.md) provide additional functionality and cost savings if you ingest at least 500 GB per day collectively among multiple workspaces in the same region. Unlike commitment tiers, workspaces in a dedicated cluster don't need to individually reach the 500 GB.
+
+See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on commitment tiers and guidance on determining which is most appropriate for your level of usage. See [Usage and estimated costs](usage-estimated-costs.md#usage-and-estimated-costs) to view estimated costs for your usage at different pricing tiers.
+
+### Optimize workspace configuration
+As your monitoring environment becomes more complex, you will need to consider whether to create additional Log Analytics workspaces. This may happen as you place resources in additional regions or as you implement additional services that use workspaces, such as Microsoft Sentinel and Microsoft Defender for Cloud.
+
+There can be cost implications with your workspace design, most notably when you combine different services, such as operational data from Azure Monitor and security data from Microsoft Sentinel. See [Workspaces with Microsoft Sentinel](logs/cost-logs.md#workspaces-with-microsoft-sentinel) and [Workspaces with Microsoft Defender for Cloud](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) for a description of these implications and guidance on determining the most cost-effective solution for your environment.
+
+## Configure tables in each workspace
+Except for [tables that don't incur charges](logs/cost-logs.md#data-size-calculation), all data in a Log Analytics workspace is billed at the same rate by default. You may be collecting data though that you query infrequently or that you need to archive for compliance but rarely access. You can significantly reduce your costs by configuring Basic Logs and by optimizing your data retention and archiving.
+
+### Configure data retention and archiving
+Data collected in a Log Analytics workspace is retained for 31 days at no charge (90 days if Microsoft Sentinel is enabled on the workspace). You can retain data beyond the default for trending analysis or other reporting, but there is a charge for this retention.
+
+Your retention requirement may just be for compliance reasons or for occasional investigation or analysis of historical data. In this case, you should configure [Archived Logs](logs/data-retention-archive.md), which allows you to retain data long term (up to 7 years) at a significantly reduced cost. There is a cost to search archived data or temporarily restore it for analysis. If you require infrequent access to this data though, this cost is more than offset by the reduced retention cost.
+
+You can configure retention and archiving for all tables in a workspace or configure each table separately. This allows you to optimize your costs by setting only the retention you require for each data type.
+
+### Configure Basic Logs (preview)
+You can save on data ingestion costs by configuring [certain tables](logs/basic-logs-configure.md#which-tables-support-basic-logs) in your Log Analytics workspace that you primarily use for debugging, troubleshooting and auditing as [Basic Logs](logs/basic-logs-configure.md). Tables configured for Basic Logs have a lower ingestion cost in exchange for reduced features. They can't be used for alerting, their retention is set to eight days, they support a limited version of the query language, and there is a cost for querying them. If you query these tables infrequently though, this query cost can be more than offset by the reduced ingestion cost.
+
+The decision whether to configure a table for Basic Logs is based on the following criteria:
+
+- The table currently supports Basic Logs.
+- You don't require more than eight days of data retention for the table.
+- You only require basic queries of the data using a limited version of the query language.
+- The cost savings for data ingestion over a month exceed the expected cost of any queries.
+
+See [Query Basic Logs in Azure Monitor (Preview)](logs/basic-logs-query.md) for details on query limitations and [Configure Basic Logs in Azure Monitor (Preview)](logs/basic-logs-configure.md) for more details about Basic Logs.
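+
+For example, the kind of simple troubleshooting query that remains possible against a Basic Logs table looks like the following sketch. It assumes `ContainerLogV2` is configured for Basic Logs; the container name and search term are placeholders:
+
+```kusto
+ContainerLogV2
+| where ContainerName == "myapp" and LogMessage has "error"   // placeholder filter values
+| project TimeGenerated, Computer, LogMessage
+```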
+
+## Reduce the amount of data collected
+The most straightforward strategy to reduce your costs for data ingestion and retention is to reduce the amount of data that you collect. Your goal should be to collect the minimal amount of data to meet your monitoring requirements. If you find that you're collecting data that's not being used for alerting or analysis, then you have an opportunity to reduce your monitoring costs by modifying your configuration to stop collecting data that you don't need.
+
+The configuration change will vary depending on the data source. The following sections provide guidance for configuring common data sources to reduce the data they send to the workspace.
+
+## Virtual machines
+Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. The following table lists the most common data collected from virtual machines and strategies for limiting them for each of the Azure Monitor agents.
++
+| Source | Strategy | Log Analytics agent | Azure Monitor agent |
+|:|:|:|:|
+| Event logs | Collect only required event logs and levels. For example, *Information* level events are rarely used and should typically not be collected. For Azure Monitor agent, filter particular event IDs that are frequent but not valuable. | Change the [event log configuration for the workspace](agents/data-sources-windows-events.md) | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries) to filter specific event IDs. |
+| Syslog | Reduce the number of facilities collected and only collect required event levels. For example, *Info* and *Debug* level events are rarely used and should typically not be collected. | Change the [syslog configuration for the workspace](agents/data-sources-syslog.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries) to filter specific events. |
+| Performance counters | Collect only the performance counters required and reduce the frequency of collection. For Azure Monitor agent, consider sending performance data only to Metrics and not Logs. | Change the [performance counter configuration for the workspace](agents/data-sources-performance-counters.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries) to filter specific counters. |
++
+### Use transformations to filter events
+The bulk of data collection from virtual machines will be from Windows or Syslog events. While you can provide more filtering with the Azure Monitor agent, you still may be collecting records that provide little value. Use [transformations](essentials/data-collection-rule-transformations.md) to implement more granular filtering and also to filter data from columns that provide little value. For example, you might have a Windows event that's valuable for alerting, but it includes columns with redundant or excessive data. You can create a transformation that allows the event to be collected but removes this excessive data.
+
+See the section below on filtering data with transformations for a summary on where to implement filtering and transformations for different data sources.
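+
+For illustration only, a workspace transformation is a KQL statement that runs against a virtual input table named `source`. The following minimal sketch assumes the agent sends Windows events to the `Event` table and that this table supports workspace transformations; it drops low-value records before they're ingested, and the event IDs shown are placeholders:
+
+```kusto
+source
+| where EventLevelName != "Information"   // drop Information-level events that are rarely used
+| where EventID !in (10016, 7036)         // placeholder IDs for frequent but low-value events
+```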
+
+### Multi-homing agents
+You should be cautious with any configuration using multi-homed agents where a single virtual machine sends data to multiple workspaces since you may be incurring charges for the same data multiple times. If you do multi-home agents, ensure that you're sending unique data to each workspace.
+
+You can also collect duplicate data with a single virtual machine running both the Azure Monitor agent and Log Analytics agent, even if they're both sending data to the same workspace. While the agents can coexist, each works independently without any knowledge of the other. You should continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](logs/../agents/azure-monitor-agent-migration.md) rather than using both together, unless you can ensure that each is collecting unique data.
+
+See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for guidance on analyzing your collected data to ensure that you aren't collecting duplicate data for the same machine.
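+
+As a rough check, and assuming the agent type is reflected in the `Heartbeat` table's `Category` column, a query like the following lists machines that reported heartbeats from more than one agent type in the last day:
+
+```kusto
+Heartbeat
+| where TimeGenerated > ago(1d)
+| summarize AgentTypes = make_set(Category) by Computer   // agent types seen per machine
+| where array_length(AgentTypes) > 1                      // machines reporting via multiple agents
+```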
+
+## Application Insights
+There are multiple methods that you can use to limit the amount of data collected by Application Insights.
+
+* **Sampling**: [Sampling](app/sampling.md) is the primary tool you can use to tune the amount of data collected by Application Insights. Use sampling to reduce the amount of telemetry that's sent from your applications with minimal distortion of metrics.
+
+* **Limit Ajax calls**: [Limit the number of Ajax calls](app/javascript.md#configuration) that can be reported in every page view or disable Ajax reporting. Note that disabling Ajax calls will disable [JavaScript correlation](app/javascript.md#enable-correlation).
+
+* **Disable unneeded modules**: [Edit ApplicationInsights.config](app/configuration-with-applicationinsights-config.md) to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data aren't required.
+
+* **Pre-aggregate metrics**: If you put calls to TrackMetric in your application, you can reduce traffic by using the overload that accepts your calculation of the average and standard deviation of a batch of measurements. Alternatively, you can use a [pre-aggregating package](https://www.myget.org/gallery/applicationinsights-sdk-labs).
+
+* **Limit the use of custom metrics**: The Application Insights option to [Enable alerting on custom metric dimensions](app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) can increase costs because this can result in the creation of more pre-aggregation metrics.
+
+* **Ensure use of updated SDKs**: Earlier versions of the ASP.NET Core SDK and Worker Service SDK [collect a large number of counters by default](app/eventcounters.md#default-counters-collected), which are collected as custom metrics. Use later versions to specify [only required counters](app/eventcounters.md#customizing-counters-to-be-collected).
+
+## Resource logs
+The data volume for [resource logs](essentials/resource-logs.md) varies significantly between services, so you should only collect the categories that are required. You may also not want to collect platform metrics from Azure resources, since this data is already being collected in Metrics. Only configure your diagnostic settings to collect metrics if you need metric data in the workspace for more complex analysis with log queries.
+
+Diagnostic settings do not allow granular filtering of resource logs. You may require certain logs in a particular category but not others. In this case, use [ingestion-time transformations](logs/ingestion-time-transformations.md) on the workspace to filter logs that you don't require. You can also filter out the values of certain columns that you don't require to save additional cost.
+
+## Other insights and services
+See the documentation for other services that store their data in a Log Analytics workspace for recommendations on optimizing their data usage. For example:
+
+- **Container insights** - [Understand monitoring costs for Container insights](containers/container-insights-cost.md#controlling-ingestion-to-reduce-cost)
+- **Microsoft Sentinel** - [Reduce costs for Microsoft Sentinel](../sentinel/billing-reduce-costs.md)
+- **Defender for Cloud** - [Setting the security event option at the workspace level](../defender-for-cloud/enable-data-collection.md#setting-the-security-event-option-at-the-workspace-level)
+++
+## Filter data with transformations (preview)
+[Data collection rule transformations in Azure Monitor](essentials/data-collection-rule-transformations.md) allow you to filter incoming data to reduce costs for data ingestion and retention. In addition to filtering records from the incoming data, you can filter out columns in the data, reducing its billable size as described in [Data size calculation](logs/cost-logs.md#data-size-calculation).
+
+Use ingestion-time transformations on the workspace to further filter data for workflows where you don't have granular control. For example, you can select categories in a [diagnostic setting](essentials/diagnostic-settings.md) to collect resource logs for a particular service, but that category might send a variety of records that you don't need. Create a transformation for the table that service uses to filter out records you don't want.
+
+You can also use ingestion-time transformations to lower the storage requirements for records you want by removing columns without useful information. For example, you might have error events in a resource log that you want for alerting, but you don't require certain columns in those records that contain a large amount of data. Create a transformation for that table that removes those columns.
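+
+As a sketch of this column-trimming case, the following transformation keeps only error records and removes a bulky column. The column names `Level` and `properties_s` are placeholders; substitute the actual columns of the target table:
+
+```kusto
+source
+| where Level == "Error"        // keep only the error events needed for alerting
+| project-away properties_s     // remove a large column that isn't needed for analysis
+```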
+
+The following table describes methods to apply transformations to different workflows.
+
+> [!NOTE]
+> Azure tables here refers to tables that are created and maintained by Microsoft and documented in the [Azure Monitor Reference](/azure/azure-monitor-reference). Custom tables are created by custom applications and have a suffix of *_CL* in their name.
+
+| Source | Target | Description | Filtering method |
+|:|:|:|:|
+| Azure Monitor agent | Azure tables | Collect data from standard sources such as Windows events, syslog, and performance data and send to Azure tables in Log Analytics workspace. | Use XPath in DCR to collect specific data from client machine. Ingestion-time transformations in agent DCR are not yet supported. |
+| Azure Monitor agent | Custom tables | Collecting data outside of standard data sources is not yet supported. | |
+| Log Analytics agent | Azure tables | Collect data from standard sources such as Windows events, syslog, and performance data and send to Azure tables in Log Analytics workspace. | Configure data collection on the workspace. Optionally, create ingestion-time transformation in the workspace DCR to filter records and columns. |
+| Log Analytics agent | Custom tables | Configure [custom logs](agents/data-sources-custom-logs.md) on the workspace to collect file based text logs. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. You must first migrate the custom table to the new custom logs API. |
+| Data Collector API | Custom tables | Use [Data Collector API](logs/data-collector-api.md) to send data to custom tables in the workspace using REST API. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. You must first migrate the custom table to the new custom logs API. |
+| Custom Logs API | Custom tables<br>Azure tables | Use [Custom Logs API](logs/custom-logs-overview.md) to send data to custom tables in the workspace using REST API. | Configure ingestion-time transformation in the DCR for the custom log. |
+| Other data sources | Azure tables | Includes resource logs from diagnostic settings and other Azure Monitor features such as Application insights, Container insights and VM insights. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. |
++
+## Monitor workspace and analyze usage
+Once you've configured your environment and data collection for cost optimization, you need to continue to monitor it to ensure that you don't experience unexpected increases in billable usage. You should also analyze your usage regularly to determine if you have additional opportunities to reduce your usage, such as further filtering out collected data that has not proven to be useful.
++
+### Set a daily cap
+A [daily cap](logs/daily-cap.md) disables data collection in a Log Analytics workspace for the rest of the day once your configured limit is reached. This should not be used as a method to reduce costs, but rather as a preventative measure to ensure that you don't exceed a particular budget. Daily caps are typically used by organizations that are particularly cost conscious.
+
+When data collection stops, you effectively have no monitoring of features and resources relying on that workspace. Rather than relying on the daily cap alone, you can configure an alert rule to notify you when data collection reaches some level before the daily cap. This allows you to address any increases before data collection shuts down, or even to temporarily disable collection for less critical resources.
+
+See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) for details on how the daily cap works and how to configure one.
+
+### Send alert when data collection is high
+In order to avoid unexpected bills, you should be proactively notified any time you experience excessive usage. This allows you to address any potential anomalies before the end of your billing period.
+
+The following example is a [log alert rule](alerts/alerts-unified-log.md) that sends an alert if the billable data volume ingested in the last 24 hours was greater than 50 GB. Modify the **Alert Logic** to use a different threshold based on expected usage in your environment. You can also increase the frequency to check usage multiple times every day, but this will result in a higher charge for the alert rule.
+
+| Setting | Value |
+|:|:|
+| **Scope** | |
+| Target scope | Select your Log Analytics workspace. |
+| **Condition** | |
+| Query | `Usage \| where IsBillable \| summarize DataGB = sum(Quantity / 1000.)` |
+| Measurement | Measure: *DataGB*<br>Aggregation type: Total<br>Aggregation granularity: 1 day |
+| Alert Logic | Operator: Greater than<br>Threshold value: 50<br>Frequency of evaluation: 1 day |
+| Actions | Select or add an [action group](alerts/action-groups.md) to notify you when the threshold is exceeded. |
+| **Details** | |
+| Severity| Warning |
+| Alert rule name | Billable data volume greater than 50 GB in 24 hours |
+
+See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for details on using log queries like the one used here to analyze billable usage in your workspace.
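+
+As a starting point for that analysis, the `Usage` table records billable volume per data type. The following query, offered as a sketch, charts billable gigabytes per day over the last month broken down by data type:
+
+```kusto
+Usage
+| where TimeGenerated > ago(31d)
+| where IsBillable == true
+| summarize BillableDataGB = sum(Quantity) / 1000. by DataType, bin(TimeGenerated, 1d)
+| render columnchart
+```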
+
+## Analyze your collected data
+When you detect an increase in data collection, then you need methods to analyze your collected data to identify the source of the increase. You should also periodically analyze data collection to determine if there's additional configuration that can decrease your usage further. This is particularly important when you add a new set of data sources, such as a new set of virtual machines or onboard a new service.
+
+See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for different methods to analyze your collected data and billable usage. This article includes a variety of log queries that will help you identify the source of any data increases and understand your basic usage patterns.
+
+## Next steps
+
+- See [Azure Monitor cost and usage](usage-estimated-costs.md) for a description of Azure Monitor and how to view and analyze your monthly bill.
+- See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
+- See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce your amount of data collected.
+- See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) to control your costs by setting a daily limit on the amount of data that may be ingested in a workspace.
azure-monitor Best Practices Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md
This article is part of the scenario [Recommendations for configuring Azure Moni
## Create Log Analytics workspace You require at least one Log Analytics workspace to enable [Azure Monitor Logs](logs/data-platform-logs.md), which is required for collecting such data as logs from Azure resources, collecting data from the guest operating system of Azure virtual machines, and for most Azure Monitor insights. Other services such as Microsoft Sentinel and Microsoft Defender for Cloud also use a Log Analytics workspace and can share the same one that you use for Azure Monitor. You can start with a single workspace to support this monitoring, but see [Designing your Azure Monitor Logs deployment](logs/design-logs-deployment.md) for guidance on when to use multiple workspaces.
-There is no cost for creating a Log Analytics workspace, but there is a potential charge once you configure data to be collected into it. See [Manage usage and costs with Azure Monitor Logs](logs/manage-cost-storage.md) for details.
+There is no cost for creating a Log Analytics workspace, but there is a potential charge once you configure data to be collected into it. See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details.
See [Create a Log Analytics workspace in the Azure portal](logs/quick-create-workspace.md) to create an initial Log Analytics workspace. See [Manage access to log data and workspaces in Azure Monitor](logs/manage-access.md) to configure access. You can use scalable methods such as Resource Manager templates to configure workspaces though, this is often not required since most environments will require a minimal number.
See [Integrate Azure AD logs with Azure Monitor logs](../active-directory/report
### Collect resource logs and platform metrics Resources in Azure automatically generate [resource logs](essentials/platform-logs-overview.md) that provide details of operations performed within the resource. Unlike platform metrics though, you need to configure resource logs to be collected. Create a diagnostic setting to send them to a Log Analytics workspace and combine them with the other data used with Azure Monitor Logs. The same diagnostic setting can be used to also send the platform metrics for most resources to the same workspace, which allows you to analyze metric data using log queries with other collected data.
-There is a cost for collecting resource logs in your Log Analytics workspace, so only select those log categories with valuable data. Collecting all categories will incur cost for collecting data with little value. See the monitoring documentation for each Azure service for a description of categories and recommendations for which to collect. Also see [Manage usage and costs with Azure Monitor Logs](logs/manage-cost-storage.md) for details on optimizing the cost of your log collection.
+There is a cost for collecting resource logs in your Log Analytics workspace, so only select those log categories with valuable data. Collecting all categories will incur cost for collecting data with little value. See the monitoring documentation for each Azure service for a description of categories and recommendations for which to collect. Also see [Azure Monitor best practices - cost management](logs/cost-logs.md) for recommendations on optimizing the cost of your log collection.
See [Create diagnostic setting to collect resource logs and metrics in Azure](essentials/diagnostic-settings.md#create-in-azure-portal) to create a diagnostic setting for an Azure resource.
azure-monitor Best Practices Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-plan.md
A core goal of your monitoring strategy will be minimizing costs. Some data coll
- [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/)
- [Monitor usage and estimated costs in Azure Monitor](usage-estimated-costs.md)
-- [Manage usage and costs with Azure Monitor Logs](logs/manage-cost-storage.md)
-- [Manage usage and costs for Application Insights](app/pricing.md)
+
## Define strategy

Before you design and implement any monitoring solution, you should establish a monitoring strategy so that you understand the goals and requirements of your plan. The strategy defines your particular requirements, the configuration that best meets those requirements, and processes to leverage the monitoring environment to maximize your applications' performance and reliability. The configuration options that you choose for Azure Monitor should be consistent with your strategy.
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
If you are utilizing [Prometheus metric scraping](container-insights-prometheus-
## Next steps
-For more information about how to understand what the costs are likely to be based on recent usage patterns from data collected with Container insights, see [Manage your usage and estimate costs](../logs/manage-cost-storage.md).
+For more information about how to understand what the costs are likely to be based on recent usage patterns from data collected with Container insights, see [Analyze usage in Log Analytics workspace](../logs/analyze-usage.md).
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
To alert on what matters, Container insights includes the following metric alert
|**(New)Average container CPU %** |Calculates average CPU used per container.|When average CPU usage per container is greater than 95%.| |**(New)Average container working set memory %** |Calculates average working set memory used per container.|When average working set memory usage per container is greater than 95%. | |Average CPU % |Calculates average CPU used per node. |When average node CPU utilization is greater than 80% |
-| Daily Data Cap Breach | When data cap is breached| When the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/manage-cost-storage.md#manage-your-maximum-daily-data-volume) |
+| Daily Data Cap Breach | When data cap is breached| When the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/daily-cap.md) |
|Average Disk Usage % |Calculates average disk usage for a node.|When disk usage for a node is greater than 80%. | |**(New)Average Persistent Volume Usage %** |Calculates average PV usage per pod. |When average PV usage per pod is greater than 80%.| |Average Working set memory % |Calculates average Working set memory for a node. |When average Working set memory for a node is greater than 80%. |
azure-monitor Container Insights Prometheus Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-integration.md
The output will show results similar to the following:
![Log query results of data ingestion volume](./media/container-insights-prometheus-integration/log-query-example-usage-02.png)
-Further information on how to monitor data usage and analyze cost is available in [Manage usage and costs with Azure Monitor Logs](../logs/manage-cost-storage.md).
+Further information on how to analyze usage is available in [Analyze usage in Log Analytics workspace](../logs/analyze-usage.md).
## Next steps
azure-monitor Metrics Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-custom-overview.md
For details on when billing will be enabled for custom metrics and metrics queri
Custom metrics are retained for the [same amount of time as platform metrics](../essentials/data-platform-metrics.md#retention-of-metrics). > [!NOTE]
-> Metrics sent to Azure Monitor via the Application Insights SDK are billed as ingested log data. They incur additional metrics charges only if the Application Insights feature [Enable alerting on custom metric dimensions](../app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) has been selected. This checkbox sends data to the Azure Monitor metrics database by using the custom metrics API to allow the more complex alerting. Learn more about the [Application Insights pricing model](../app/pricing.md#pricing-model) and [prices in your region](https://azure.microsoft.com/pricing/details/monitor/).
+> Metrics sent to Azure Monitor via the Application Insights SDK are billed as ingested log data. They incur additional metrics charges only if the Application Insights feature [Enable alerting on custom metric dimensions](../app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) has been selected. This checkbox sends data to the Azure Monitor metrics database by using the custom metrics API to allow the more complex alerting. Learn more about the [Application Insights pricing model](../usage-estimated-costs.md) and [prices in your region](https://azure.microsoft.com/pricing/details/monitor/).
## How to send custom metrics
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
description: List of metrics available for each resource type with Azure Monitor
Previously updated : 02/08/2022 Last updated : 03/03/2022
The Azure Monitor agent replaces the Azure Diagnostics extension and Log Analyti
This latest update adds a new column and reorders the metrics to be alphabetical. The additional information means that the tables might have a horizontal scroll bar at the bottom, depending on the width of your browser window. If you seem to be missing information, use the scroll bar to see the entirety of the table. - ## microsoft.aadiam/azureADMetrics |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|BackendDuration|Yes|Duration of Backend Requests|Milliseconds|Average|Duration of Backend Requests in milliseconds|Location, Hostname|
-|Capacity|Yes|Capacity|Percent|Average|Utilization metric for ApiManagement service|Location|
-|Duration|Yes|Overall Duration of Gateway Requests|Milliseconds|Average|Overall Duration of Gateway Requests in milliseconds|Location, Hostname|
+|BackendDuration|Yes|Duration of Backend Requests|MilliSeconds|Average|Duration of Backend Requests in milliseconds|Location, Hostname|
+|Capacity|Yes|Capacity|Percent|Average|Utilization metric for ApiManagement service. Note: For skus other than Premium, 'Max' aggregation will show the value as 0.|Location|
+|ConnectionAttempts|Yes|WebSocket Connection Attempts (Preview)|Count|Total|Count of WebSocket connection attempts based on selected source and destination|Location, Source, Destination, State|
+|Duration|Yes|Overall Duration of Gateway Requests|MilliSeconds|Average|Overall Duration of Gateway Requests in milliseconds|Location, Hostname|
|EventHubDroppedEvents|Yes|Dropped EventHub Events|Count|Total|Number of events skipped because of queue size limit reached|Location| |EventHubRejectedEvents|Yes|Rejected EventHub Events|Count|Total|Number of rejected EventHub events (wrong configuration or unauthorized)|Location| |EventHubSuccessfulEvents|Yes|Successful EventHub Events|Count|Total|Number of successful EventHub events|Location|
This latest update adds a new column and reorders the metrics to be alphabetical
|SuccessfulRequests|Yes|Successful Gateway Requests (Deprecated)|Count|Total|Number of successful gateway requests - Use multi-dimension request metric with GatewayResponseCodeCategory dimension instead|Location, Hostname| |TotalRequests|Yes|Total Gateway Requests (Deprecated)|Count|Total|Number of gateway requests - Use multi-dimension request metric with GatewayResponseCodeCategory dimension instead|Location, Hostname| |UnauthorizedRequests|Yes|Unauthorized Gateway Requests (Deprecated)|Count|Total|Number of unauthorized gateway requests - Use multi-dimension request metric with GatewayResponseCodeCategory dimension instead|Location, Hostname|
+|WebSocketMessages|Yes|WebSocket Messages (Preview)|Count|Total|Count of WebSocket messages based on selected source and destination|Location, Source, Destination|
## Microsoft.AppConfiguration/configurationStores
This latest update adds a new column and reorders the metrics to be alphabetical
|RequestHandled|Yes|Handled Requests|Count|Total|Handled Requests|Node| |StorageUsage|Yes|Storage Usage|Bytes|Average|Storage Usage|Node|
-## Microsoft.BotService/botServices
+## microsoft.botservice/botservices
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|RequestLatency|Yes|Requests Latencies|Milliseconds|Average|How long it takes to get request response|Operation, Authentication, Protocol, ResourceId, Region|
-|RequestsTraffic|Yes|Requests Traffic|Count|Average|Number of requests within a given period of time|Operation, Authentication, Protocol, ResourceId, Region, StatusCode, StatusCodeClass, StatusText|
+|RequestLatency|Yes|Request Latency|Milliseconds|Total|Time taken by the server to process the request|Operation, Authentication, Protocol, DataCenter|
+|RequestsTraffic|Yes|Requests Traffic|Percent|Count|Number of Requests Made|Operation, Authentication, Protocol, StatusCode, StatusCodeClass, DataCenter|
+ ## Microsoft.Cache/redis
This latest update adds a new column and reorders the metrics to be alphabetical
|allcacheRead|Yes|Cache Read (Instance Based)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary| |allcacheWrite|Yes|Cache Write (Instance Based)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary| |allconnectedclients|Yes|Connected Clients (Instance Based)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
+|allConnectionsClosedPerSecond|Yes|Connections Closed Per Second (Instance Based)|CountPerSecond|Maximum|The number of instantaneous connections closed per second on the cache via port 6379 or 6380 (SSL). For more details, see https://aka.ms/redis/metrics.|ShardId, Primary, Ssl|
+|allConnectionsCreatedPerSecond|Yes|Connections Created Per Second (Instance Based)|CountPerSecond|Maximum|The number of instantaneous connections created per second on the cache via port 6379 or 6380 (SSL). For more details, see https://aka.ms/redis/metrics.|ShardId, Primary, Ssl|
|allevictedkeys|Yes|Evicted Keys (Instance Based)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary| |allexpiredkeys|Yes|Expired Keys (Instance Based)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary| |allgetcommands|Yes|Gets (Instance Based)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
This latest update adds a new column and reorders the metrics to be alphabetical
|usedmemoryRss8|Yes|Used Memory RSS (Shard 8)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions| |usedmemoryRss9|Yes|Used Memory RSS (Shard 9)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions| + ## Microsoft.Cache/redisEnterprise |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|cachemisses|Yes|Cache Misses|Count|Total||InstanceId| |cacheRead|Yes|Cache Read|BytesPerSecond|Maximum||InstanceId| |cacheWrite|Yes|Cache Write|BytesPerSecond|Maximum||InstanceId|
-|CharactersTrained|Yes|Characters Trained (Deprecated)|Count|Total|Total number of characters trained.|ApiName, OperationName, Region|
-|CharactersTranslated|Yes|Characters Translated (Deprecated)|Count|Total|Total number of characters in incoming text request.|ApiName, OperationName, Region|
|connectedclients|Yes|Connected Clients|Count|Maximum||InstanceId| |errors|Yes|Errors|Count|Maximum||InstanceId, ErrorType| |evictedkeys|Yes|Evicted Keys|Count|Total||No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|TableEntityCount|Yes|Table Entity Count|Count|Average|The number of table entities in the storage account's Table service.|No Dimensions| |Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response.|ResponseType, GeoType, ApiName, Authentication| - ## Microsoft.Cloudtest/hostedpools |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalRead|No|TotalRead|BytesPerSecond|Average|The total lustre file system read per second|filesystem_name, category, system| |TotalWrite|No|TotalWrite|BytesPerSecond|Average|The total lustre file system write per second|filesystem_name, category, system| - ## Microsoft.CognitiveServices/accounts |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
+|AudioSecondsTranscribed|Yes|Audio Seconds Transcribed|Count|Total|Number of seconds transcribed|ApiName, FeatureName, UsageChannel, Region|
+|AudioSecondsTranslated|Yes|Audio Seconds Translated|Count|Total|Number of seconds translated|ApiName, FeatureName, UsageChannel, Region|
|BlockedCalls|Yes|Blocked Calls|Count|Total|Number of calls that exceeded rate or quota limit.|ApiName, OperationName, Region| |CharactersTrained|Yes|Characters Trained (Deprecated)|Count|Total|Total number of characters trained.|ApiName, OperationName, Region| |CharactersTranslated|Yes|Characters Translated (Deprecated)|Count|Total|Total number of characters in incoming text request.|ApiName, OperationName, Region| |ClientErrors|Yes|Client Errors|Count|Total|Number of calls with client side error (HTTP response code 4xx).|ApiName, OperationName, Region|
+|ComputerVisionTransactions|Yes|Computer Vision Transactions|Count|Total|Number of Computer Vision Transactions|ApiName, FeatureName, UsageChannel, Region|
+|CustomVisionTrainingTime|Yes|Custom Vision Training Time|Seconds|Total|Custom Vision training time|ApiName, FeatureName, UsageChannel, Region|
+|CustomVisionTransactions|Yes|Custom Vision Transactions|Count|Total|Number of Custom Vision prediction transactions|ApiName, FeatureName, UsageChannel, Region|
|DataIn|Yes|Data In|Bytes|Total|Size of incoming data in bytes.|ApiName, OperationName, Region| |DataOut|Yes|Data Out|Bytes|Total|Size of outgoing data in bytes.|ApiName, OperationName, Region|
+|DocumentCharactersTranslated|Yes|Document Characters Translated|Count|Total|Number of characters in document translation request.|ApiName, FeatureName, UsageChannel, Region|
+|DocumentCustomCharactersTranslated|Yes|Document Custom Characters Translated|Count|Total|Number of characters in custom document translation request.|ApiName, FeatureName, UsageChannel, Region|
+|FaceImagesTrained|Yes|Face Images Trained|Count|Total|Number of images trained. 1,000 images trained per transaction.|ApiName, FeatureName, UsageChannel, Region|
+|FacesStored|Yes|Faces Stored|Count|Total|Number of faces stored, prorated daily. The number of faces stored is reported daily.|ApiName, FeatureName, UsageChannel, Region|
+|FaceTransactions|Yes|Face Transactions|Count|Total|Number of API calls made to Face service|ApiName, FeatureName, UsageChannel, Region|
+|ImagesStored|Yes|Images Stored|Count|Total|Number of Custom Vision images stored.|ApiName, FeatureName, UsageChannel, Region|
|Latency|Yes|Latency|MilliSeconds|Average|Latency in milliseconds.|ApiName, OperationName, Region| |LearnedEvents|Yes|Learned Events|Count|Total|Number of Learned Events.|IsMatchBaseline, Mode, RunId|
-|MatchedRewards|Yes|Matched Rewards|Count|Total| Number of Matched Rewards.|Mode, RunId|
+|LUISSpeechRequests|Yes|LUIS Speech Requests|Count|Total|Number of LUIS speech to intent understanding requests|ApiName, FeatureName, UsageChannel, Region|
+|LUISTextRequests|Yes|LUIS Text Requests|Count|Total|Number of LUIS text requests|ApiName, FeatureName, UsageChannel, Region|
+|MatchedRewards|Yes|Matched Rewards|Count|Total|Number of Matched Rewards.|Mode, RunId|
+|NumberofSpeakerProfiles|Yes|Number of Speaker Profiles|Count|Total|Number of speaker profiles enrolled. Prorated hourly.|ApiName, FeatureName, UsageChannel, Region|
|ObservedRewards|Yes|Observed Rewards|Count|Total|Number of Observed Rewards.|Mode, RunId|
-|ProcessedCharacters|Yes|Processed Characters|Count|Total|Number of Characters.|ApiName, FeatureName, UsageChannel, Region|
+|ProcessedCharacters|Yes|Processed Characters|Count|Total|Number of Characters processed by Immersive Reader.|ApiName, FeatureName, UsageChannel, Region|
+|ProcessedHealthTextRecords|Yes|Processed Health Text Records|Count|Total|Number of health text records processed|ApiName, FeatureName, UsageChannel, Region|
+|ProcessedImages|Yes|Processed Images|Count|Total|Number of images processed|ApiName, FeatureName, UsageChannel, Region|
+|ProcessedPages|Yes|Processed Pages|Count|Total|Number of pages processed|ApiName, FeatureName, UsageChannel, Region|
|ProcessedTextRecords|Yes|Processed Text Records|Count|Total|Count of Text Records.|ApiName, FeatureName, UsageChannel, Region| |ServerErrors|Yes|Server Errors|Count|Total|Number of calls with service internal error (HTTP response code 5xx).|ApiName, OperationName, Region|
-|SpeechSessionDuration|Yes|Speech Session Duration|Seconds|Total|Total duration of speech session in seconds.|ApiName, OperationName, Region|
+|SpeakerRecognitionTransactions|Yes|Speaker Recognition Transactions|Count|Total|Number of speaker recognition transactions|ApiName, FeatureName, UsageChannel, Region|
+|SpeechModelHostingHours|Yes|Speech Model Hosting Hours|Count|Total|Number of speech model hosting hours|ApiName, FeatureName, UsageChannel, Region|
+|SpeechSessionDuration|Yes|Speech Session Duration (Deprecated)|Seconds|Total|Total duration of speech session in seconds.|ApiName, OperationName, Region|
|SuccessfulCalls|Yes|Successful Calls|Count|Total|Number of successful calls.|ApiName, OperationName, Region|
+|SynthesizedCharacters|Yes|Synthesized Characters|Count|Total|Number of Characters.|ApiName, FeatureName, UsageChannel, Region|
+|TextCharactersTranslated|Yes|Text Characters Translated|Count|Total|Number of characters in incoming text translation request.|ApiName, FeatureName, UsageChannel, Region|
+|TextCustomCharactersTranslated|Yes|Text Custom Characters Translated|Count|Total|Number of characters in incoming custom text translation request.|ApiName, FeatureName, UsageChannel, Region|
+|TextTrainedCharacters|Yes|Text Trained Characters|Count|Total|Number of characters trained using text translation.|ApiName, FeatureName, UsageChannel, Region|
|TotalCalls|Yes|Total Calls|Count|Total|Total number of calls.|ApiName, OperationName, Region| |TotalErrors|Yes|Total Errors|Count|Total|Total number of calls with error response (HTTP response code 4xx or 5xx).|ApiName, OperationName, Region| |TotalTokenCalls|Yes|Total Token Calls|Count|Total|Total number of token calls.|ApiName, OperationName, Region|
-|TotalTransactions|Yes|Total Transactions|Count|Total|Total number of transactions.|No Dimensions|
+|TotalTransactions|Yes|Total Transactions (Deprecated)|Count|Total|Total number of transactions.|No Dimensions|
+|VoiceModelHostingHours|Yes|Voice Model Hosting Hours|Count|Total|Number of Hours.|ApiName, FeatureName, UsageChannel, Region|
+|VoiceModelTrainingMinutes|Yes|Voice Model Training Minutes|Count|Total|Number of Minutes.|ApiName, FeatureName, UsageChannel, Region|
## Microsoft.Communication/CommunicationServices
This latest update adds a new column and reorders the metrics to be alphabetical
|||||||| |APIRequestAuthentication|No|Authentication API Requests|Count|Count|Count of all requests against the Communication Services Authentication endpoint.|Operation, StatusCode, StatusCodeClass| |APIRequestChat|Yes|Chat API Requests|Count|Count|Count of all requests against the Communication Services Chat endpoint.|Operation, StatusCode, StatusCodeClass|
+|APIRequestNetworkTraversal|No|Network Traversal API Requests|Count|Count|Count of all requests against the Communication Services Network Traversal endpoint.|Operation, StatusCode, StatusCodeClass|
|APIRequestSMS|Yes|SMS API Requests|Count|Count|Count of all requests against the Communication Services SMS endpoint.|Operation, StatusCode, StatusCodeClass, ErrorCode|
This latest update adds a new column and reorders the metrics to be alphabetical
|Network Out Total|Yes|Network Out Total|Bytes|Total|The number of bytes out on all network interfaces by the Virtual Machine(s) (Outgoing Traffic)|RoleInstanceId, RoleId| |Percentage CPU|Yes|Percentage CPU|Percent|Average|The percentage of allocated compute units that are currently in use by the Virtual Machine(s)|RoleInstanceId, RoleId| - ## microsoft.compute/disks |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|Composite Disk Write Bytes/sec|No|Disk Write Bytes/sec(Preview)|Bytes|Average|Bytes/sec written to disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available|No Dimensions| |Composite Disk Write Operations/sec|No|Disk Write Operations/sec(Preview)|Bytes|Average|Number of Write IOs performed on a disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available|No Dimensions| - ## Microsoft.Compute/virtualMachines |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|DataPipelineMessageCount|Yes|Data pipeline message count|Count|Total|The total number of messages sent to the MCVP data pipeline for storage.|VehicleId, DeviceName, IsSuccessful, FailureCategory| |ExtensionInvocationCount|Yes|Extension invocation count|Count|Total|Total number of times an extension was called.|VehicleId, DeviceName, ExtensionName, IsSuccessful, FailureCategory| |ExtensionInvocationRuntime|Yes|Extension invocation execution time|Milliseconds|Average|Average execution time spent inside an extension in milliseconds.|VehicleId, DeviceName, ExtensionName, IsSuccessful, FailureCategory|
+|MessagesInCount|Yes|Messages received count|Count|Total|The total number of vehicle-sourced publishes.|VehicleId, DeviceName, IsSuccessful, FailureCategory|
+|MessagesOutCount|Yes|Messages sent count|Count|Total|The total number of cloud-sourced publishes.|VehicleId, DeviceName, IsSuccessful, FailureCategory|
|ProvisionerServiceRequestRuntime|Yes|Vehicle provision execution time|Milliseconds|Average|The average execution time of vehicle provision requests in milliseconds|VehicleId, DeviceName, IsSuccessful, FailureCategory| |ProvisionerServiceRequests|Yes|Vehicle provision service requests|Count|Total|Total number of vehicle provision requests|VehicleId, DeviceName, IsSuccessful, FailureCategory| |StateStoreReadRequestLatency|Yes|State store read execution time|Milliseconds|Average|State store read request execution time average in milliseconds.|VehicleId, DeviceName, ExtensionName, IsSuccessful, FailureCategory|
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |AgentPoolCPUTime|Yes|AgentPool CPU Time|Seconds|Total|AgentPool CPU Time in seconds|No Dimensions|
-|RunDuration|Yes|Run Duration|Milliseconds|Total|Run Duration in milliseconds|No Dimensions|
-|SuccessfulPullCount|Yes|Successful Pull Count|Count|Average|Number of successful image pulls|No Dimensions|
-|SuccessfulPushCount|Yes|Successful Push Count|Count|Average|Number of successful image pushes|No Dimensions|
-|TotalPullCount|Yes|Total Pull Count|Count|Average|Number of image pulls in total|No Dimensions|
-|TotalPushCount|Yes|Total Push Count|Count|Average|Number of image pushes in total|No Dimensions|
+|RunDuration|Yes|Run Duration|MilliSeconds|Total|Run Duration in milliseconds|No Dimensions|
+|StorageUsed|Yes|Storage used|Bytes|Average|The amount of storage used by the container registry. For a registry account, it's the sum of capacity used by all the repositories within a registry. It's sum of capacity used by shared layers, manifest files, and replica copies in each of its repositories.|Geolocation|
+|SuccessfulPullCount|Yes|Successful Pull Count|Count|Total|Number of successful image pulls|No Dimensions|
+|SuccessfulPushCount|Yes|Successful Push Count|Count|Total|Number of successful image pushes|No Dimensions|
+|TotalPullCount|Yes|Total Pull Count|Count|Total|Number of image pulls in total|No Dimensions|
+|TotalPushCount|Yes|Total Push Count|Count|Total|Number of image pushes in total|No Dimensions|
## Microsoft.ContainerService/managedClusters |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
+|apiserver_current_inflight_requests|No|Inflight Requests|Count|Average|Maximum number of currently used inflight requests on the apiserver per request kind in the last second|requestKind|
+|cluster_autoscaler_cluster_safe_to_autoscale|No|Cluster Health|Count|Average|Determines whether or not cluster autoscaler will take action on the cluster|No Dimensions|
+|cluster_autoscaler_scale_down_in_cooldown|No|Scale Down Cooldown|Count|Average|Determines if the scale down is in cooldown - No nodes will be removed during this timeframe|No Dimensions|
+|cluster_autoscaler_unneeded_nodes_count|No|Unneeded Nodes|Count|Average|Cluster autoscaler marks these nodes as candidates for deletion, and they are eventually deleted|No Dimensions|
+|cluster_autoscaler_unschedulable_pods_count|No|Unschedulable Pods|Count|Average|Number of pods that are currently unschedulable in the cluster|No Dimensions|
|kube_node_status_allocatable_cpu_cores|No|Total number of available cpu cores in a managed cluster|Count|Average|Total number of available cpu cores in a managed cluster|No Dimensions| |kube_node_status_allocatable_memory_bytes|No|Total amount of available memory in a managed cluster|Bytes|Average|Total amount of available memory in a managed cluster|No Dimensions| |kube_node_status_condition|No|Statuses for various node conditions|Count|Average|Statuses for various node conditions|condition, status, status2, node| |kube_pod_status_phase|No|Number of pods by phase|Count|Average|Number of pods by phase|phase, namespace, pod| |kube_pod_status_ready|No|Number of pods in Ready state|Count|Average|Number of pods in Ready state|namespace, pod, condition|
+|node_cpu_usage_millicores|Yes|CPU Usage Millicores|MilliCores|Average|Aggregated measurement of CPU utilization in millicores across the cluster|node, nodepool|
+|node_cpu_usage_percentage|Yes|CPU Usage Percentage|Percent|Average|Aggregated average CPU utilization measured in percentage across the cluster|node, nodepool|
+|node_disk_usage_bytes|Yes|Disk Used Bytes|Bytes|Average|Disk space used in bytes by device|node, nodepool, device|
+|node_disk_usage_percentage|Yes|Disk Used Percentage|Percent|Average|Disk space used in percent by device|node, nodepool, device|
+|node_memory_rss_bytes|Yes|Memory RSS Bytes|Bytes|Average|Container RSS memory used in bytes|node, nodepool|
+|node_memory_rss_percentage|Yes|Memory RSS Percentage|Percent|Average|Container RSS memory used in percent|node, nodepool|
+|node_memory_working_set_bytes|Yes|Memory Working Set Bytes|Bytes|Average|Container working set memory used in bytes|node, nodepool|
+|node_memory_working_set_percentage|Yes|Memory Working Set Percentage|Percent|Average|Container working set memory used in percent|node, nodepool|
+|node_network_in_bytes|Yes|Network In Bytes|Bytes|Average|Network received bytes|node, nodepool|
+|node_network_out_bytes|Yes|Network Out Bytes|Bytes|Average|Network transmitted bytes|node, nodepool|
## Microsoft.CustomProviders/resourceproviders
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |active_connections|Yes|Active Connections|Count|Average|Active Connections|ServerName|
+|apps_reserved_memory_percent|Yes|Reserved Memory percent|Percent|Average|Percentage of Commit Memory Limit Reserved by Applications|ServerName|
|cpu_percent|Yes|CPU percent|Percent|Average|CPU percent|ServerName| |iops|Yes|IOPS|Count|Average|IO operations per second|ServerName| |memory_percent|Yes|Memory percent|Percent|Average|Memory percent|ServerName|
This latest update adds a new column and reorders the metrics to be alphabetical
|d2c.endpoints.egress.storage|Yes|Routing: messages delivered to storage|Count|Total|The number of times IoT Hub routing successfully delivered messages to storage endpoints.|No Dimensions| |d2c.endpoints.egress.storage.blobs|Yes|Routing: blobs delivered to storage|Count|Total|The number of times IoT Hub routing delivered blobs to storage endpoints.|No Dimensions| |d2c.endpoints.egress.storage.bytes|Yes|Routing: data delivered to storage|Bytes|Total|The amount of data (bytes) IoT Hub routing delivered to storage endpoints.|No Dimensions|
-|d2c.endpoints.latency.builtIn.events|Yes|Routing: message latency for messages/events|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into the built-in endpoint (messages/events).|No Dimensions|
-|d2c.endpoints.latency.eventHubs|Yes|Routing: message latency for Event Hub|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and message ingress into an Event Hub endpoint.|No Dimensions|
-|d2c.endpoints.latency.serviceBusQueues|Yes|Routing: message latency for Service Bus Queue|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a Service Bus queue endpoint.|No Dimensions|
-|d2c.endpoints.latency.serviceBusTopics|Yes|Routing: message latency for Service Bus Topic|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a Service Bus topic endpoint.|No Dimensions|
-|d2c.endpoints.latency.storage|Yes|Routing: message latency for storage|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a storage endpoint.|No Dimensions|
-|d2c.telemetry.egress.dropped|Yes|Routing: telemetry messages dropped|Count|Total|The number of times messages were dropped by IoT Hub routing due to dead endpoints. This value does not count messages delivered to fallback route as dropped messages are not delivered there.|No Dimensions|
+|d2c.endpoints.latency.builtIn.events|Yes|Routing: message latency for messages/events|MilliSeconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into the built-in endpoint (messages/events).|No Dimensions|
+|d2c.endpoints.latency.eventHubs|Yes|Routing: message latency for Event Hub|MilliSeconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and message ingress into an Event Hub endpoint.|No Dimensions|
+|d2c.endpoints.latency.serviceBusQueues|Yes|Routing: message latency for Service Bus Queue|MilliSeconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a Service Bus queue endpoint.|No Dimensions|
+|d2c.endpoints.latency.serviceBusTopics|Yes|Routing: message latency for Service Bus Topic|MilliSeconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a Service Bus topic endpoint.|No Dimensions|
+|d2c.endpoints.latency.storage|Yes|Routing: message latency for storage|MilliSeconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a storage endpoint.|No Dimensions|
+|d2c.telemetry.egress.dropped|Yes|Routing: telemetry messages dropped |Count|Total|The number of times messages were dropped by IoT Hub routing due to dead endpoints. This value does not count messages delivered to fallback route as dropped messages are not delivered there.|No Dimensions|
|d2c.telemetry.egress.fallback|Yes|Routing: messages delivered to fallback|Count|Total|The number of times IoT Hub routing delivered messages to the endpoint associated with the fallback route.|No Dimensions| |d2c.telemetry.egress.invalid|Yes|Routing: telemetry messages incompatible|Count|Total|The number of times IoT Hub routing failed to deliver messages due to an incompatibility with the endpoint. This value does not include retries.|No Dimensions|
-|d2c.telemetry.egress.orphaned|Yes|Routing: telemetry messages orphaned|Count|Total|The number of times messages were orphaned by IoT Hub routing because they didn't match any routing rules (including the fallback rule).|No Dimensions|
+|d2c.telemetry.egress.orphaned|Yes|Routing: telemetry messages orphaned |Count|Total|The number of times messages were orphaned by IoT Hub routing because they didn't match any routing rules (including the fallback rule). |No Dimensions|
|d2c.telemetry.egress.success|Yes|Routing: telemetry messages delivered|Count|Total|The number of times messages were successfully delivered to all endpoints using IoT Hub routing. If a message is routed to multiple endpoints, this value increases by one for each successful delivery. If a message is delivered to the same endpoint multiple times, this value increases by one for each successful delivery.|No Dimensions| |d2c.telemetry.ingress.allProtocol|Yes|Telemetry message send attempts|Count|Total|Number of device-to-cloud telemetry messages attempted to be sent to your IoT hub|No Dimensions| |d2c.telemetry.ingress.sendThrottle|Yes|Number of throttling errors|Count|Total|Number of throttling errors due to device throughput throttles|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|devices.connectedDevices.allProtocol|Yes|Connected devices (deprecated) |Count|Total|Number of devices connected to your IoT hub|No Dimensions| |devices.totalDevices|Yes|Total devices (deprecated)|Count|Total|Number of devices registered to your IoT hub|No Dimensions| |EventGridDeliveries|Yes|Event Grid deliveries|Count|Total|The number of IoT Hub events published to Event Grid. Use the Result dimension for the number of successful and failed requests. EventType dimension shows the type of event (https://aka.ms/ioteventgrid).|Result, EventType|
-|EventGridLatency|Yes|Event Grid latency|Milliseconds|Average|The average latency (milliseconds) from when the Iot Hub event was generated to when the event was published to Event Grid. This number is an average between all event types. Use the EventType dimension to see latency of a specific type of event.|EventType|
+|EventGridLatency|Yes|Event Grid latency|MilliSeconds|Average|The average latency (milliseconds) from when the Iot Hub event was generated to when the event was published to Event Grid. This number is an average between all event types. Use the EventType dimension to see latency of a specific type of event.|EventType|
|jobs.cancelJob.failure|Yes|Failed job cancellations|Count|Total|The count of all failed calls to cancel a job.|No Dimensions| |jobs.cancelJob.success|Yes|Successful job cancellations|Count|Total|The count of all successful calls to cancel a job.|No Dimensions| |jobs.completed|Yes|Completed jobs|Count|Total|The count of all completed jobs.|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|jobs.queryJobs.success|Yes|Successful job queries|Count|Total|The count of all successful calls to query jobs.|No Dimensions| |RoutingDataSizeInBytesDelivered|Yes|Routing Delivery Message Size in Bytes (preview)|Bytes|Total|The total size in bytes of messages delivered by IoT hub to an endpoint. You can use the EndpointName and EndpointType dimensions to view the size of the messages in bytes delivered to your different endpoints. The metric value increases for every message delivered, including if the message is delivered to multiple endpoints or if the message is delivered to the same endpoint multiple times.|EndpointType, EndpointName, RoutingSource| |RoutingDeliveries|Yes|Routing Deliveries (preview)|Count|Total|The number of times IoT Hub attempted to deliver messages to all endpoints using routing. To see the number of successful or failed attempts, use the Result dimension. To see the reason of failure, like invalid, dropped, or orphaned, use the FailureReasonCategory dimension. You can also use the EndpointName and EndpointType dimensions to understand how many messages were delivered to your different endpoints. The metric value increases by one for each delivery attempt, including if the message is delivered to multiple endpoints or if the message is delivered to the same endpoint multiple times.|EndpointType, EndpointName, FailureReasonCategory, Result, RoutingSource|
-|RoutingDeliveryLatency|Yes|Routing Delivery Latency (preview)|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into an endpoint. You can use the EndpointName and EndpointType dimensions to understand the latency to your different endpoints.|EndpointType, EndpointName, RoutingSource|
+|RoutingDeliveryLatency|Yes|Routing Delivery Latency (preview)|MilliSeconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into an endpoint. You can use the EndpointName and EndpointType dimensions to understand the latency to your different endpoints.|EndpointType, EndpointName, RoutingSource|
|totalDeviceCount|No|Total devices|Count|Average|Number of devices registered to your IoT hub|No Dimensions| |twinQueries.failure|Yes|Failed twin queries|Count|Total|The count of all failed twin queries.|No Dimensions| |twinQueries.resultSize|Yes|Twin queries result size|Bytes|Average|The average, min, and max of the result size of all successful twin queries.|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|BillingApiOperations|Yes|Billing API Operations|Count|Total|Billing metric for the count of all API requests made against the Azure Digital Twins service.|MeterId| |BillingMessagesProcessed|Yes|Billing Messages Processed|Count|Total|Billing metric for the number of messages sent out from Azure Digital Twins to external endpoints.|MeterId| |BillingQueryUnits|Yes|Billing Query Units|Count|Total|The number of Query Units, an internally computed measure of service resource usage, consumed to execute queries.|MeterId|
+|DataHistoryRouting|Yes|Data History Messages Routed (preview)|Count|Total|The number of messages routed to a time series database.|EndpointType, Result|
+|DataHistoryRoutingFailureRate|Yes|Data History Routing Failure Rate (preview)|Percent|Average|The percentage of events that result in an error as they are routed from Azure Digital Twins to a time series database.|EndpointType|
+|DataHistoryRoutingLatency|Yes|Data History Routing Latency (preview)|Milliseconds|Average|Time elapsed between an event getting routed from Azure Digital Twins to when it is posted to a time series database.|EndpointType, Result|
|IngressEvents|Yes|Ingress Events|Count|Total|The number of incoming telemetry events into Azure Digital Twins.|Result| |IngressEventsFailureRate|Yes|Ingress Events Failure Rate|Percent|Average|The percentage of incoming telemetry events for which the service returns an internal error (500) response code.|No Dimensions| |IngressEventsLatency|Yes|Ingress Events Latency|Milliseconds|Average|The time from when an event arrives to when it is ready to be egressed by Azure Digital Twins, at which point the service sends a success/fail result.|Result|
This latest update adds a new column and reorders the metrics to be alphabetical
|TwinCount|Yes|Twin Count|Count|Total|Total number of twins in the Azure Digital Twins instance. Use this metric to determine if you are approaching the service limit for max number of twins allowed per instance.|No Dimensions|
-## Microsoft.DocumentDB/databaseAccounts
+## Microsoft.DocumentDB/cassandraClusters
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|cassandra_cache_capacity|No|capacity|Bytes|Average|Cache capacity in bytes.|cassandra_datacenter, cassandra_node, cache_name|
+|cassandra_cache_entries|No|entries|Count|Average|Total number of cache entries.|cassandra_datacenter, cassandra_node, cache_name|
+|cassandra_cache_hit_rate|No|hit rate|Percent|Average|All time cache hit rate.|cassandra_datacenter, cassandra_node, cache_name|
+|cassandra_cache_hits|No|hits|Count|Total|Total number of cache hits.|cassandra_datacenter, cassandra_node, cache_name|
+|cassandra_cache_miss_latency_histogram|No|Cache miss latency histogram|Count|Total|Histogram of cache miss latency (in microseconds).|cassandra_datacenter, cassandra_node, quantile|
+|cassandra_cache_miss_latency_p99|No|miss latency p99 (in microseconds)|Count|Average|p99 Latency of misses.|cassandra_datacenter, cassandra_node, cache_name|
+|cassandra_cache_requests|No|requests|Count|Total|Total number of cache requests.|cassandra_datacenter, cassandra_node, cache_name|
+|cassandra_cache_size|No|size|Bytes|Average|Total size of occupied cache, in bytes.|cassandra_datacenter, cassandra_node, cache_name|
+|cassandra_client_auth_failure|No|auth failure|Count|Total|Number of failed client authentication requests.|cassandra_datacenter, cassandra_node|
+|cassandra_client_auth_success|No|auth success|Count|Total|Number of successful client authentication requests.|cassandra_datacenter, cassandra_node|
+|cassandra_client_request_condition_not_met|No|condition not met|Count|Total|Number of transaction preconditions that did not match current values.|cassandra_datacenter, cassandra_node, request_type|
+|cassandra_client_request_contention_histogram|No|contention|Count|Total|How many contended reads/writes were encountered.|cassandra_datacenter, cassandra_node, request_type|
+|cassandra_client_request_contention_histogram_p99|No|contention histogram p99|Count|Average|p99 How many contended writes were encountered.|cassandra_datacenter, cassandra_node, request_type|
+|cassandra_client_request_failures|No|failures|Count|Total|Number of transaction failures encountered.|cassandra_datacenter, cassandra_node, request_type|
+|cassandra_client_request_latency_histogram|No|Client request latency histogram|Count|Total|Histogram of client request latency (in microseconds).|cassandra_datacenter, cassandra_node, quantile, request_type|
+|cassandra_client_request_latency_p99|No|latency p99 (in microseconds)|Count|Average|p99 Latency.|cassandra_datacenter, cassandra_node, request_type|
+|cassandra_client_request_timeouts|No|timeouts|Count|Total|Number of timeouts encountered.|cassandra_datacenter, cassandra_node, request_type|
+|cassandra_client_request_unfinished_commit|No|unfinished commit|Count|Total|Number of transactions that were committed on write.|cassandra_datacenter, cassandra_node, request_type|
+|cassandra_commit_log_waiting_on_commit_latency_histogram|No|waiting on commit latency histogram|Count|Total|Histogram of the time spent waiting on CL fsync (in microseconds); for Periodic this is only occurs when the sync is lagging its sync interval.|cassandra_datacenter, cassandra_node, quantile|
+|cassandra_cql_prepared_statements_executed|No|prepared statements executed|Count|Total|Number of prepared statements executed.|cassandra_datacenter, cassandra_node|
+|cassandra_cql_regular_statements_executed|No|regular statements executed|Count|Total|Number of non prepared statements executed.|cassandra_datacenter, cassandra_node|
+|cassandra_jvm_gc_count|No|gc count|Count|Average|Total number of collections that have occurred.|cassandra_datacenter, cassandra_node|
+|cassandra_jvm_gc_time|No|gc time|MilliSeconds|Average|Approximate accumulated collection elapsed time.|cassandra_datacenter, cassandra_node|
+|cassandra_table_all_memtables_live_data_size|No|all memtables live data size|Count|Average|Total amount of live data stored in the memtables (2i and pending flush memtables included) that resides off-heap, excluding any data structure overhead.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_all_memtables_off_heap_size|No|all memtables off heap size|Count|Average|Total amount of data stored in the memtables (2i and pending flush memtables included) that resides off-heap.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_bloom_filter_disk_space_used|No|bloom filter disk space used|Bytes|Average|Disk space used by bloom filter (in bytes).|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_bloom_filter_false_positives|No|bloom filter false positives|Count|Average|Number of false positives on table's bloom filter.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_bloom_filter_false_ratio|No|bloom filter false ratio|Percent|Average|False positive ratio of table's bloom filter.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_bloom_filter_off_heap_memory_used|No|bloom filter off-heap memory used|Count|Average|Off-heap memory used by bloom filter.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_bytes_flushed|No|bytes flushed|Bytes|Total|Total number of bytes flushed since server [re]start.|cassandra_datacenter, cassandra_node|
+|cassandra_table_cas_commit|No|cas commit (in microseconds)|Count|Total|Latency of paxos commit round.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_cas_commit_p99|No|cas commit p99 (in microseconds)|Count|Average|p99 Latency of paxos commit round.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_cas_prepare|No|cas prepare (in microseconds)|Count|Total|Latency of paxos prepare round.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_cas_prepare_p99|No|cas prepare p99 (in microseconds)|Count|Average|p99 Latency of paxos prepare round.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_cas_propose|No|cas propose (in microseconds)|Count|Total|Latency of paxos propose round.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_cas_propose_p99|No|cas propose p99 (in microseconds)|Count|Average|p99 Latency of paxos propose round.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_col_update_time_delta_histogram|No|col update time delta|Count|Total|Column update time delta on this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_col_update_time_delta_histogram_p99|No|col update time delta p99|Count|Average|p99 Column update time delta on this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_compaction_bytes_written|No|compaction bytes written|Bytes|Total|Total number of bytes written by compaction since server [re]start.|cassandra_datacenter, cassandra_node|
+|cassandra_table_compression_metadata_off_heap_memory_used|No|compression metadata off heap memory used|Count|Average|Off-heap memory used by compression meta data.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_compression_ratio|No|compression ratio|Percent|Average|Current compression ratio for all SSTables.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_coordinator_read_latency|No|coordinator read latency (in microseconds)|Count|Total|Coordinator read latency for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_coordinator_read_latency_p99|No|coordinator read latency p99 (in microseconds)|Count|Average|p99 Coordinator read latency for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_coordinator_scan_latency|No|coordinator scan latency (in microseconds)|Count|Total|Coordinator range scan latency for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_coordinator_scan_latency_p99|No|coordinator scan latency p99 (in microseconds)|Count|Average|p99 Coordinator range scan latency for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_dropped_mutations|No|dropped mutations|Count|Total|Number of dropped mutations on this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_estimated_column_count_histogram|No|estimated column count|Count|Total|Estimated number of columns.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_estimated_column_count_histogram_p99|No|estimated column count p99|Count|Average|p99 Estimated number of columns.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_estimated_partition_count|No|estimated partition count|Count|Average|Approximate number of keys in table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_estimated_partition_size_histogram|No|estimated partition size histogram|Bytes|Total|Histogram of estimated partition size.|cassandra_datacenter, cassandra_node, quantile|
+|cassandra_table_estimated_partition_size_histogram_p99|No|estimated partition size p99|Bytes|Average|p99 Estimated partition size (in bytes).|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_index_summary_off_heap_memory_used|No|index summary off heap memory used|Count|Average|Off-heap memory used by index summary.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_key_cache_hit_rate|No|key cache hit rate|Percent|Average|Key cache hit rate for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_live_disk_space_used|No|live disk space used|Bytes|Total|Disk space used by SSTables belonging to this table (in bytes).|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_live_scanned_histogram|No|live scanned|Count|Total|Live cells scanned in queries on this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_live_scanned_histogram_p99|No|live scanned p99|Count|Average|p99 Live cells scanned in queries on this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_live_sstable_count|No|live sstable count|Count|Average|Number of SSTables on disk for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_max_partition_size|No|max partition size|Bytes|Average|Size of the largest compacted partition (in bytes).|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_mean_partition_size|No|mean partition size|Bytes|Average|Size of the average compacted partition (in bytes).|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_memtable_columns_count|No|memtable columns count|Count|Average|Total number of columns present in the memtable.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_memtable_off_heap_size|No|memtable off heap size|Count|Average|Total amount of data stored in the memtable that resides off-heap, including column related overhead and partitions overwritten.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_memtable_on_heap_size|No|memtable on heap size|Count|Average|Total amount of data stored in the memtable that resides on-heap, including column related overhead and partitions overwritten.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_memtable_switch_count|No|memtable switch count|Count|Total|Number of times flush has resulted in the memtable being switched out.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_min_partition_size|No|min partition size|Bytes|Average|Size of the smallest compacted partition (in bytes).|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_pending_compactions|No|pending compactions|Count|Average|Estimate of number of pending compactions for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_pending_flushes|No|pending flushes|Count|Total|Estimated number of flush tasks pending for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_percent_repaired|No|percent repaired|Percent|Average|Percent of table data that is repaired on disk.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_range_latency|No|range latency (in microseconds)|Count|Total|Local range scan latency for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_range_latency_p99|No|range latency p99 (in microseconds)|Count|Average|p99 Local range scan latency for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_read_latency|No|read latency (in microseconds)|Count|Total|Local read latency for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_read_latency_p99|No|read latency p99 (in microseconds)|Count|Average|p99 Local read latency for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_row_cache_hit|No|row cache hit|Count|Total|Number of table row cache hits.|cassandra_datacenter, cassandra_node|
+|cassandra_table_row_cache_hit_out_of_range|No|row cache hit out of range|Count|Total|Number of table row cache hits that do not satisfy the query filter, thus went to disk.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_row_cache_miss|No|row cache miss|Count|Total|Number of table row cache misses.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_speculative_retries|No|speculative retries|Count|Total|Number of times speculative retries were sent for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_sstables_per_read_histogram|No|sstables per read|Count|Total|Number of sstable data files accessed per single partition read. SSTables skipped due to Bloom Filters, min-max key or partition index lookup are not taken into account.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_sstables_per_read_histogram_p99|No|sstables per read p99|Count|Average|p99 Number of sstable data files accessed per single partition read. SSTables skipped due to Bloom Filters, min-max key or partition index lookup are not taken into account.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_tombstone_scanned_histogram|No|tombstone scanned|Count|Total|Tombstones scanned in queries on this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_tombstone_scanned_histogram_p99|No|tombstone scanned p99|Count|Average|p99 Tombstones scanned in queries on this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_total_disk_space_used|No|total disk space used|Count|Total|Total disk space used by SSTables belonging to this table, including obsolete ones waiting to be GC'd.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_view_lock_acquire_time|No|view lock acquire time|Count|Total|Time taken acquiring a partition lock for materialized view updates on this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_view_lock_acquire_time_p99|No|view lock acquire time p99|Count|Average|p99 Time taken acquiring a partition lock for materialized view updates on this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_view_read_time|No|view read time|Count|Total|Time taken during the local read of a materialized view update.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_view_read_time_p99|No|view read time p99|Count|Average|p99 Time taken during the local read of a materialized view update.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_waiting_on_free_memtable_space|No|waiting on free memtable space|Count|Total|Time spent waiting for free memtable space, either on- or off-heap.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_waiting_on_free_memtable_space_p99|No|waiting on free memtable space p99|Count|Average|p99 Time spent waiting for free memtable space, either on- or off-heap.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_write_latency|No|write latency (in microseconds)|Count|Total|Local write latency for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_table_write_latency_p99|No|write latency p99 (in microseconds)|Count|Average|p99 Local write latency for this table.|cassandra_datacenter, cassandra_node, table, keyspace|
+|cassandra_thread_pools_active_tasks|No|active tasks|Count|Average|Number of tasks being actively worked on by this pool.|cassandra_datacenter, cassandra_node, pool_name, pool_type|
+|cassandra_thread_pools_currently_blocked_tasks|No|currently blocked tasks|Count|Total|Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.|cassandra_datacenter, cassandra_node, pool_name, pool_type|
+|cassandra_thread_pools_max_pool_size|No|max pool size|Count|Average|The maximum number of threads in this pool.|cassandra_datacenter, cassandra_node, pool_name, pool_type|
+|cassandra_thread_pools_pending_tasks|No|pending tasks|Count|Average|Number of tasks queued up on this pool.|cassandra_datacenter, cassandra_node, pool_name, pool_type|
+|cassandra_thread_pools_total_blocked_tasks|No|total blocked tasks|Count|Total|Number of tasks that were blocked due to queue saturation.|cassandra_datacenter, cassandra_node, pool_name, pool_type|
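If you want to pull one of these Cassandra metrics programmatically, a minimal sketch with the `azure-monitor-query` Python SDK might look like the following. The resource ID path (a Managed Instance for Apache Cassandra cluster under `Microsoft.DocumentDB/cassandraClusters`) and the placeholder values are assumptions for illustration; the metric name and the `keyspace`/`table` dimensions come from the table above.

```python
# Sketch: query a per-table Cassandra metric, split by keyspace and table.
# The resource ID below is a hypothetical placeholder; substitute your own cluster.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.DocumentDB/cassandraClusters/<cluster-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# Request the p99 local read latency over the last hour at 5-minute granularity.
response = client.query_resource(
    resource_id,
    metric_names=["cassandra_table_read_latency_p99"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.AVERAGE],
    filter="keyspace eq '*' and table eq '*'",  # split the result by both dimensions
)

for metric in response.metrics:
    for series in metric.timeseries:
        dims = series.metadata_values  # e.g. {'keyspace': ..., 'table': ...}
        for point in series.data:
            if point.average is not None:
                print(dims, point.timestamp, point.average)
```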
++
+## Microsoft.DocumentDB/DatabaseAccounts
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |AddRegion|Yes|Region Added|Count|Count|Region Added|Region| |AutoscaleMaxThroughput|No|Autoscale Max Throughput|Count|Maximum|Autoscale Max Throughput|DatabaseName, CollectionName|
-|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage"will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more info about Cosmos DB storage quota, please check this doc [https://docs.microsoft.com/azure/cosmos-db/concepts-limits](../../cosmos-db/concepts-limits.md). After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date.|CollectionName, DatabaseName, Region|
+|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage" will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20 GB. You can enable PartitionKeyStatistics in Diagnostic Log to see the storage consumption for top partition keys. For more information about the Cosmos DB storage quota, see https://docs.microsoft.com/azure/cosmos-db/concepts-limits. After deprecation, any remaining alert rules still defined on the deprecated metric will be automatically disabled after the deprecation date.|CollectionName, DatabaseName, Region|
|CassandraConnectionClosures|No|Cassandra Connection Closures|Count|Total|Number of Cassandra connections that were closed, reported at a 1 minute granularity|APIType, Region, ClosureReason| |CassandraConnectorAvgReplicationLatency|No|Cassandra Connector Average ReplicationLatency|MilliSeconds|Average|Cassandra Connector Average ReplicationLatency|No Dimensions| |CassandraConnectorReplicationHealthStatus|No|Cassandra Connector Replication Health Status|Count|Count|Cassandra Connector Replication Health Status|NotStarted, ReplicationInProgress, Error|
This latest update adds a new column and reorders the metrics to be alphabetical
|DedicatedGatewayAverageCPUUsage|No|DedicatedGatewayAverageCPUUsage|Percent|Average|Average CPU usage across dedicated gateway instances|Region, MetricType| |DedicatedGatewayAverageMemoryUsage|No|DedicatedGatewayAverageMemoryUsage|Bytes|Average|Average memory usage across dedicated gateway instances, which is used for both routing requests and caching data|Region| |DedicatedGatewayMaximumCPUUsage|No|DedicatedGatewayMaximumCPUUsage|Percent|Average|Average Maximum CPU usage across dedicated gateway instances|Region, MetricType|
-|DedicatedGatewayRequests|Yes|DedicatedGatewayRequests|Count|Count|Requests at the dedicated gateway|DatabaseName, CollectionName, CacheExercised, OperationName, Region|
+|DedicatedGatewayRequests|Yes|DedicatedGatewayRequests|Count|Count|Requests at the dedicated gateway|DatabaseName, CollectionName, CacheExercised, OperationName, Region, CacheHit|
|DeleteAccount|Yes|Account Deleted|Count|Count|Account Deleted|No Dimensions| |DocumentCount|No|Document Count|Count|Total|Total document count reported at 5 minutes, 1 hour and 1 day granularity|CollectionName, DatabaseName, Region| |DocumentQuota|No|Document Quota|Bytes|Total|Total storage quota reported at 5 minutes granularity|CollectionName, DatabaseName, Region|
This latest update adds a new column and reorders the metrics to be alphabetical
|GremlinGraphDelete|No|Gremlin Graph Deleted|Count|Count|Gremlin Graph Deleted|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, OperationType| |GremlinGraphThroughputUpdate|No|Gremlin Graph Throughput Updated|Count|Count|Gremlin Graph Throughput Updated|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest| |GremlinGraphUpdate|No|Gremlin Graph Updated|Count|Count|Gremlin Graph Updated|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
+|GremlinRequestCharges|No|Gremlin Request Charges|Count|Total|Request Units consumed for Gremlin requests made|APIType, DatabaseName, CollectionName, Region|
+|GremlinRequests|No|Gremlin Requests|Count|Count|Number of Gremlin requests made|APIType, DatabaseName, CollectionName, Region, ErrorCode|
|IndexUsage|No|Index Usage|Bytes|Total|Total index usage reported at 5 minutes granularity|CollectionName, DatabaseName, Region| |IntegratedCacheEvictedEntriesSize|No|IntegratedCacheEvictedEntriesSize|Bytes|Average|Size of the entries evicted from the integrated cache|Region| |IntegratedCacheItemExpirationCount|No|IntegratedCacheItemExpirationCount|Count|Average|Number of items evicted from the integrated cache due to TTL expiration|Region, CacheEntryType|
This latest update adds a new column and reorders the metrics to be alphabetical
|MongoDBDatabaseUpdate|No|Mongo Database Updated|Count|Count|Mongo Database Updated|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType| |MongoRequestCharge|Yes|Mongo Request Charge|Count|Total|Mongo Request Units Consumed|DatabaseName, CollectionName, Region, CommandName, ErrorCode, Status| |MongoRequests|Yes|Mongo Requests|Count|Count|Number of Mongo Requests Made|DatabaseName, CollectionName, Region, CommandName, ErrorCode, Status|
-|MongoRequestsCount|No|(deprecated) Mongo Request Rate|CountPerSecond|Average|Mongo request Count per second|DatabaseName, CollectionName, Region, ErrorCode|
-|MongoRequestsDelete|No|(deprecated) Mongo Delete Request Rate|CountPerSecond|Average|Mongo Delete request per second|DatabaseName, CollectionName, Region, ErrorCode|
-|MongoRequestsInsert|No|(deprecated) Mongo Insert Request Rate|CountPerSecond|Average|Mongo Insert count per second|DatabaseName, CollectionName, Region, ErrorCode|
-|MongoRequestsQuery|No|(deprecated) Mongo Query Request Rate|CountPerSecond|Average|Mongo Query request per second|DatabaseName, CollectionName, Region, ErrorCode|
-|MongoRequestsUpdate|No|(deprecated) Mongo Update Request Rate|CountPerSecond|Average|Mongo Update request per second|DatabaseName, CollectionName, Region, ErrorCode|
-|NormalizedRUConsumption|No|Normalized RU Consumption|Percent|Maximum|Max RU consumption percentage per minute|CollectionName, DatabaseName, Region, PartitionKeyRangeId|
+|NormalizedRUConsumption|No|Normalized RU Consumption|Percent|Maximum|Max RU consumption percentage per minute|CollectionName, DatabaseName, Region, PartitionKeyRangeId, CollectionRid|
|ProvisionedThroughput|No|Provisioned Throughput|Count|Maximum|Provisioned Throughput|DatabaseName, CollectionName| |RegionFailover|Yes|Region Failed Over|Count|Count|Region Failed Over|No Dimensions| |RemoveRegion|Yes|Region Removed|Count|Count|Region Removed|Region| |ReplicationLatency|Yes|P99 Replication Latency|MilliSeconds|Average|P99 Replication Latency across source and target regions for geo-enabled account|SourceRegion, TargetRegion| |ServerSideLatency|No|Server Side Latency|MilliSeconds|Average|Server Side Latency|DatabaseName, CollectionName, Region, ConnectionMode, OperationType, PublicAPIType|
+|ServerSideLatencyDirect|No|Server Side Latency Direct|MilliSeconds|Average|Server Side Latency in Direct Connection Mode|DatabaseName, CollectionName, Region, ConnectionMode, OperationType, PublicAPIType, APIType|
+|ServerSideLatencyGateway|No|Server Side Latency Gateway|MilliSeconds|Average|Server Side Latency in Gateway Connection Mode|DatabaseName, CollectionName, Region, ConnectionMode, OperationType, PublicAPIType, APIType|
|ServiceAvailability|No|Service Availability|Percent|Average|Account requests availability at one hour, day or month granularity|No Dimensions| |SqlContainerCreate|No|Sql Container Created|Count|Count|Sql Container Created|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType| |SqlContainerDelete|No|Sql Container Deleted|Count|Count|Sql Container Deleted|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, OperationType|
This latest update adds a new column and reorders the metrics to be alphabetical
|DeadLetteredCount|Yes|Dead Lettered Events|Count|Total|Total dead lettered events matching to this event subscription|Topic, EventSubscriptionName, DomainEventSubscriptionName, DeadLetterReason| |DeliveryAttemptFailCount|No|Delivery Failed Events|Count|Total|Total events failed to deliver to this event subscription|Topic, EventSubscriptionName, DomainEventSubscriptionName, Error, ErrorType| |DeliverySuccessCount|Yes|Delivered Events|Count|Total|Total events delivered to this event subscription|Topic, EventSubscriptionName, DomainEventSubscriptionName|
-|DestinationProcessingDurationInMs|No|Destination Processing Duration|Milliseconds|Average|Destination processing duration in milliseconds|Topic, EventSubscriptionName, DomainEventSubscriptionName|
+|DestinationProcessingDurationInMs|No|Destination Processing Duration|MilliSeconds|Average|Destination processing duration in milliseconds|Topic, EventSubscriptionName, DomainEventSubscriptionName|
|DroppedEventCount|Yes|Dropped Events|Count|Total|Total dropped events matching to this event subscription|Topic, EventSubscriptionName, DomainEventSubscriptionName, DropReason| |MatchedEventCount|Yes|Matched Events|Count|Total|Total events matched to this event subscription|Topic, EventSubscriptionName, DomainEventSubscriptionName| |PublishFailCount|Yes|Publish Failed Events|Count|Total|Total events failed to publish to this topic|Topic, ErrorType, Error| |PublishSuccessCount|Yes|Published Events|Count|Total|Total events published to this topic|Topic|
-|PublishSuccessLatencyInMs|Yes|Publish Success Latency|Milliseconds|Total|Publish success latency in milliseconds|No Dimensions|
+|PublishSuccessLatencyInMs|Yes|Publish Success Latency|MilliSeconds|Total|Publish success latency in milliseconds|No Dimensions|
## Microsoft.EventGrid/eventSubscriptions
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|DeadLetteredCount|Yes|Dead Lettered Events|Count|Total|Total dead lettered events matching to this event subscription|DeadLetterReason, EventSubscriptionName|
-|DeliveryAttemptFailCount|No|Delivery Failed Events|Count|Total|Total events failed to deliver to this event subscription|Error, ErrorType, EventSubscriptionName|
-|DeliverySuccessCount|Yes|Delivered Events|Count|Total|Total events delivered to this event subscription|EventSubscriptionName|
-|DestinationProcessingDurationInMs|No|Destination Processing Duration|Milliseconds|Average|Destination processing duration in milliseconds|EventSubscriptionName|
-|DroppedEventCount|Yes|Dropped Events|Count|Total|Total dropped events matching to this event subscription|DropReason, EventSubscriptionName|
-|MatchedEventCount|Yes|Matched Events|Count|Total|Total events matched to this event subscription|EventSubscriptionName|
-|PublishFailCount|Yes|Publish Failed Events|Count|Total|Total events failed to publish to this topic|ErrorType, Error|
-|PublishSuccessCount|Yes|Published Events|Count|Total|Total events published to this topic|No Dimensions|
-|PublishSuccessLatencyInMs|Yes|Publish Success Latency|Milliseconds|Total|Publish success latency in milliseconds|No Dimensions|
-|UnmatchedEventCount|Yes|Unmatched Events|Count|Total|Total events not matching any of the event subscriptions for this topic|No Dimensions|
+|PublishFailCount|Yes|Publish Failed Events|Count|Total|Total events failed to publish to this partner namespace|ErrorType, Error|
+|PublishSuccessCount|Yes|Published Events|Count|Total|Total events published to this partner namespace|No Dimensions|
+|PublishSuccessLatencyInMs|Yes|Publish Success Latency|MilliSeconds|Total|Publish success latency in milliseconds|No Dimensions|
+|UnmatchedEventCount|Yes|Unmatched Events|Count|Total|Total events not matching any of the partner topics|No Dimensions|
## Microsoft.EventGrid/partnerTopics

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|AdvancedFilterEvaluationCount|Yes|Advanced Filter Evaluations|Count|Total|Total advanced filters evaluated across event subscriptions for this topic.|EventSubscriptionName|
+|AdvancedFilterEvaluationCount|Yes|Advanced Filter Evaluations|Count|Total|Total advanced filters evaluated across event subscriptions for this partner topic.|EventSubscriptionName|
|DeadLetteredCount|Yes|Dead Lettered Events|Count|Total|Total dead lettered events matching to this event subscription|DeadLetterReason, EventSubscriptionName| |DeliveryAttemptFailCount|No|Delivery Failed Events|Count|Total|Total events failed to deliver to this event subscription|Error, ErrorType, EventSubscriptionName| |DeliverySuccessCount|Yes|Delivered Events|Count|Total|Total events delivered to this event subscription|EventSubscriptionName|
-|DestinationProcessingDurationInMs|No|Destination Processing Duration|Milliseconds|Average|Destination processing duration in milliseconds|EventSubscriptionName|
+|DestinationProcessingDurationInMs|No|Destination Processing Duration|MilliSeconds|Average|Destination processing duration in milliseconds|EventSubscriptionName|
|DroppedEventCount|Yes|Dropped Events|Count|Total|Total dropped events matching to this event subscription|DropReason, EventSubscriptionName| |MatchedEventCount|Yes|Matched Events|Count|Total|Total events matched to this event subscription|EventSubscriptionName|
-|PublishFailCount|Yes|Publish Failed Events|Count|Total|Total events failed to publish to this topic|ErrorType, Error|
-|PublishSuccessCount|Yes|Published Events|Count|Total|Total events published to this topic|No Dimensions|
-|UnmatchedEventCount|Yes|Unmatched Events|Count|Total|Total events not matching any of the event subscriptions for this topic|No Dimensions|
+|PublishSuccessCount|Yes|Published Events|Count|Total|Total events published to this partner topic|No Dimensions|
+|UnmatchedEventCount|Yes|Unmatched Events|Count|Total|Total events not matching any of the event subscriptions for this partner topic|No Dimensions|
## Microsoft.EventGrid/systemTopics
This latest update adds a new column and reorders the metrics to be alphabetical
|DeadLetteredCount|Yes|Dead Lettered Events|Count|Total|Total dead lettered events matching to this event subscription|DeadLetterReason, EventSubscriptionName| |DeliveryAttemptFailCount|No|Delivery Failed Events|Count|Total|Total events failed to deliver to this event subscription|Error, ErrorType, EventSubscriptionName| |DeliverySuccessCount|Yes|Delivered Events|Count|Total|Total events delivered to this event subscription|EventSubscriptionName|
-|DestinationProcessingDurationInMs|No|Destination Processing Duration|Milliseconds|Average|Destination processing duration in milliseconds|EventSubscriptionName|
+|DestinationProcessingDurationInMs|No|Destination Processing Duration|MilliSeconds|Average|Destination processing duration in milliseconds|EventSubscriptionName|
|DroppedEventCount|Yes|Dropped Events|Count|Total|Total dropped events matching to this event subscription|DropReason, EventSubscriptionName| |MatchedEventCount|Yes|Matched Events|Count|Total|Total events matched to this event subscription|EventSubscriptionName| |PublishFailCount|Yes|Publish Failed Events|Count|Total|Total events failed to publish to this topic|ErrorType, Error| |PublishSuccessCount|Yes|Published Events|Count|Total|Total events published to this topic|No Dimensions|
-|PublishSuccessLatencyInMs|Yes|Publish Success Latency|Milliseconds|Total|Publish success latency in milliseconds|No Dimensions|
+|PublishSuccessLatencyInMs|Yes|Publish Success Latency|MilliSeconds|Total|Publish success latency in milliseconds|No Dimensions|
|UnmatchedEventCount|Yes|Unmatched Events|Count|Total|Total events not matching any of the event subscriptions for this topic|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.EventHub.|No Dimensions|
-## Microsoft.EventHub/namespaces
+## Microsoft.EventHub/Namespaces
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
This latest update adds a new column and reorders the metrics to be alphabetical
|INREQS|Yes|Incoming Requests (Deprecated)|Count|Total|Total incoming send requests for a namespace (Deprecated)|No Dimensions| |INTERR|Yes|Internal Server Errors (Deprecated)|Count|Total|Total internal server errors for a namespace (Deprecated)|No Dimensions| |MISCERR|Yes|Other Errors (Deprecated)|Count|Total|Total failed requests for a namespace (Deprecated)|No Dimensions|
-|NamespaceCpuUsage|No|CPU|Percent|Maximum|CPU usage metric for Premium SKU namespaces.|No Dimensions|
-|NamespaceMemoryUsage|No|Memory Usage|Percent|Maximum|Memory usage metric for Premium SKU namespaces.|No Dimensions|
+|NamespaceCpuUsage|No|CPU|Percent|Maximum|CPU usage metric for Premium SKU namespaces.|Replica|
+|NamespaceMemoryUsage|No|Memory Usage|Percent|Maximum|Memory usage metric for Premium SKU namespaces.|Replica|
|OutgoingBytes|Yes|Outgoing Bytes.|Bytes|Total|Outgoing Bytes for Microsoft.EventHub.|EntityName| |OutgoingMessages|Yes|Outgoing Messages|Count|Total|Outgoing Messages for Microsoft.EventHub.|EntityName| |OUTMSGS|Yes|Outgoing Messages (obsolete) (Deprecated)|Count|Total|Total outgoing messages for a namespace. This metric is deprecated. Please use Outgoing Messages metric instead (Deprecated)|No Dimensions|
-|QuotaExceededErrors|No|Quota Exceeded Errors.|Count|Total|Quota Exceeded Errors for Microsoft.EventHub.|EntityName, |
-|ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.EventHub.|EntityName, |
+|QuotaExceededErrors|No|Quota Exceeded Errors.|Count|Total|Quota Exceeded Errors for Microsoft.EventHub.|EntityName, OperationResult|
+|ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.EventHub.|EntityName, OperationResult|
|Size|No|Size|Bytes|Average|Size of an EventHub in Bytes.|EntityName|
-|SuccessfulRequests|No|Successful Requests|Count|Total|Successful Requests for Microsoft.EventHub.|EntityName, |
+|SuccessfulRequests|No|Successful Requests|Count|Total|Successful Requests for Microsoft.EventHub.|EntityName, OperationResult|
|SUCCREQ|Yes|Successful Requests (Deprecated)|Count|Total|Total successful requests for a namespace (Deprecated)|No Dimensions| |SVRBSY|Yes|Server Busy Errors (Deprecated)|Count|Total|Total server busy errors for a namespace (Deprecated)|No Dimensions|
-|ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.EventHub.|EntityName, |
-|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.EventHub.|EntityName, |
+|ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.EventHub.|EntityName, OperationResult|
+|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.EventHub.|EntityName, OperationResult|
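Several of the Event Hubs metrics above gained an OperationResult dimension. To check programmatically which dimensions and aggregations a metric actually exposes on your namespace, a rough sketch with the `azure-monitor-query` SDK (namespace resource ID is a placeholder; attribute names reflect my understanding of that library) could be:

```python
# Sketch: enumerate metric definitions for an Event Hubs namespace to see
# supported units and dimensions. The resource ID is a hypothetical placeholder.
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

namespace_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.EventHub/namespaces/<namespace-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())

for definition in client.list_metric_definitions(namespace_id):
    dimensions = ", ".join(definition.dimensions or []) or "No Dimensions"
    print(f"{definition.name}: unit={definition.unit}, dimensions=[{dimensions}]")
```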
## Microsoft.HDInsight/clusters
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalRequests|Yes|Total Requests|Count|Sum|The total number of requests received by the service.|Protocol|
+## Microsoft.HealthcareApis/workspaces/fhirservices
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|Availability|Yes|Availability|Percent|Average|The availability rate of the service.|No Dimensions|
+|TotalDataSize|Yes|Total Data Size|Bytes|Total|Total size of the data in the backing database, in bytes.|No Dimensions|
+|TotalErrors|Yes|Total Errors|Count|Sum|The total number of internal server errors encountered by the service.|Protocol, StatusCode, StatusCodeClass, StatusCodeText|
+|TotalLatency|Yes|Total Latency|Milliseconds|Average|The response latency of the service.|Protocol|
+|TotalRequests|Yes|Total Requests|Count|Sum|The total number of requests received by the service.|Protocol|
++
## Microsoft.HealthcareApis/workspaces/iotconnectors

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|DeviceEvent|Yes|Number of Incoming Messages|Count|Sum|The total number of messages received by the Azure IoT Connector for FHIR prior to any normalization.|Operation, ResourceName|
|DeviceEventProcessingLatencyMs|Yes|Average Normalize Stage Latency|Milliseconds|Average|The average time between an event's ingestion time and the time the event is processed for normalization.|Operation, ResourceName|
+|IotConnectorStatus|Yes|IotConnector Health Status|Percent|Average|Health checks which indicate the overall health of the IoT Connector.|Operation, ResourceName, HealthCheckName|
|Measurement|Yes|Number of Measurements|Count|Sum|The number of normalized value readings received by the FHIR conversion stage of the Azure IoT Connector for FHIR.|Operation, ResourceName|
|MeasurementGroup|Yes|Number of Message Groups|Count|Sum|The total number of unique groupings of measurements across type, device, patient, and configured time period generated by the FHIR conversion stage.|Operation, ResourceName|
|MeasurementIngestionLatencyMs|Yes|Average Group Stage Latency|Milliseconds|Average|The time period between when the IoT Connector received the device data and when the data is processed by the FHIR conversion stage.|Operation, ResourceName|
|NormalizedEvent|Yes|Number of Normalized Messages|Count|Sum|The total number of mapped normalized values output from the normalization stage of the Azure IoT Connector for FHIR.|Operation, ResourceName|
|TotalErrors|Yes|Total Error Count|Count|Sum|The total number of errors logged by the Azure IoT Connector for FHIR|Name, Operation, ErrorType, ErrorSeverity, ResourceName|
-
## microsoft.hybridnetwork/networkfunctions

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
||||||||
|HyperVVirtualProcessorUtilization|Yes|Average CPU Utilization|Percent|Average|Total average percentage of virtual CPU utilization at a one-minute interval. The total number of virtual CPUs is based on the user-configured value in the SKU definition. Further filtering can be applied based on RoleName defined in the SKU.|InstanceName|
-
## microsoft.insights/autoscalesettings

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|provisionedDeviceCount|No|Total Provisioned Devices|Count|Average|Number of devices provisioned in IoT Central application|No Dimensions|
-## Microsoft.KeyVault/managedHSMs
+## microsoft.keyvault/managedhsms
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|Availability|No|Overall Vault Availability|Percent|Average|Vault requests availability|ActivityType, ActivityName, StatusCode, StatusCodeClass|
+|Availability|No|Overall Service Availability|Percent|Average|Service requests availability|ActivityType, ActivityName, StatusCode, StatusCodeClass|
|ServiceApiHit|Yes|Total Service Api Hits|Count|Count|Number of total service api hits|ActivityType, ActivityName| |ServiceApiLatency|No|Overall Service Api Latency|Milliseconds|Average|Overall latency of service api requests|ActivityType, ActivityName, StatusCode, StatusCodeClass|
-|ServiceApiResult|Yes|Total Service Api Results|Count|Count|Gets the available metrics for a Managed HSM pool|ActivityType, ActivityName, StatusCode, StatusCodeClass|
## Microsoft.KeyVault/vaults
This latest update adds a new column and reorders the metrics to be alphabetical
|Availability|Yes|Overall Vault Availability|Percent|Average|Vault requests availability|ActivityType, ActivityName, StatusCode, StatusCodeClass| |SaturationShoebox|No|Overall Vault Saturation|Percent|Average|Vault capacity used|ActivityType, ActivityName, TransactionType| |ServiceApiHit|Yes|Total Service Api Hits|Count|Count|Number of total service api hits|ActivityType, ActivityName|
-|ServiceApiLatency|Yes|Overall Service Api Latency|Milliseconds|Average|Overall latency of service api requests|ActivityType, ActivityName, StatusCode, StatusCodeClass|
+|ServiceApiLatency|Yes|Overall Service Api Latency|MilliSeconds|Average|Overall latency of service api requests|ActivityType, ActivityName, StatusCode, StatusCodeClass|
|ServiceApiResult|Yes|Total Service Api Results|Count|Count|Number of total service api results|ActivityType, ActivityName, StatusCode, StatusCodeClass|
-
## microsoft.kubernetes/connectedClusters

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|capacity_cpu_cores|Yes|Total number of cpu cores in a connected cluster|Count|Total|Total number of cpu cores in a connected cluster|No Dimensions|
-
## Microsoft.Kusto/Clusters

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|TriggerThrottledEvents|Yes|Trigger Throttled Events|Count|Total|Number of workflow trigger throttled events.|No Dimensions|
-## Microsoft.Logic/workflows
+## Microsoft.Logic/Workflows
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
This latest update adds a new column and reorders the metrics to be alphabetical
|CpuUtilization|Yes|CpuUtilization|Count|Average|Percentage of utilization on a CPU node. Utilization is reported at one minute intervals.|Scenario, runId, NodeId, ClusterName| |CpuUtilizationMillicores|Yes|CpuUtilizationMillicores|Count|Average|Utilization of a CPU node in millicores. Utilization is aggregated in one minute intervals.|RunId, InstanceId, ComputeName| |CpuUtilizationPercentage|Yes|CpuUtilizationPercentage|Count|Average|Utilization percentage of a CPU node. Utilization is aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
+|DiskAvailMegabytes|Yes|DiskAvailMegabytes|Count|Average|Available disk space in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
+|DiskReadMegabytes|Yes|DiskReadMegabytes|Count|Average|Data read from disk in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
+|DiskUsedMegabytes|Yes|DiskUsedMegabytes|Count|Average|Used disk space in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
+|DiskWriteMegabytes|Yes|DiskWriteMegabytes|Count|Average|Data written into disk in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
|Errors|Yes|Errors|Count|Total|Number of run errors in this workspace. Count is updated whenever run encounters an error.|Scenario| |Failed Runs|Yes|Failed Runs|Count|Total|Number of runs failed for this workspace. Count is updated when a run fails.|Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName| |Finalizing Runs|Yes|Finalizing Runs|Count|Total|Number of runs entered finalizing state for this workspace. Count is updated when a run has completed but output collection still in progress.|Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName|
This latest update adds a new column and reorders the metrics to be alphabetical
|ContentKeyPolicyCount|Yes|Content Key Policy count|Count|Average|How many content key policies are already created in current media service account|No Dimensions| |ContentKeyPolicyQuota|Yes|Content Key Policy quota|Count|Average|How many content key polices are allowed for current media service account|No Dimensions| |ContentKeyPolicyQuotaUsedPercentage|Yes|Content Key Policy quota used percentage|Percent|Average|Content Key Policy used percentage in current media service account|No Dimensions|
+|JobQuota|Yes|Job quota|Count|Average|The Job quota for the current media service account.|No Dimensions|
+|JobsScheduled|Yes|Jobs Scheduled|Count|Average|The number of Jobs in the Scheduled state. Counts on this metric only reflect jobs submitted through the v3 API. Jobs submitted through the v2 (Legacy) API are not counted.|No Dimensions|
|MaxChannelsAndLiveEventsCount|Yes|Max live event quota|Count|Average|The maximum number of live events allowed in the current media services account|No Dimensions| |MaxRunningChannelsAndLiveEventsCount|Yes|Max running live event quota|Count|Average|The maximum number of running live events allowed in the current media services account|No Dimensions| |RunningChannelsAndLiveEventsCount|Yes|Running live event count|Count|Average|The total number of running live events in the current media services account|No Dimensions| |StreamingPolicyCount|Yes|Streaming Policy count|Count|Average|How many streaming policies are already created in current media service account|No Dimensions| |StreamingPolicyQuota|Yes|Streaming Policy quota|Count|Average|How many streaming policies are allowed for current media service account|No Dimensions| |StreamingPolicyQuotaUsedPercentage|Yes|Streaming Policy quota used percentage|Percent|Average|Streaming Policy used percentage in current media service account|No Dimensions|
+|TransformQuota|Yes|Transform quota|Count|Average|The Transform quota for the current media service account.|No Dimensions|
## Microsoft.Media/mediaservices/liveEvents
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|IngressBytes|Yes|Ingress Bytes|Bytes|Total|The number of bytes ingressed by the pipeline node.|PipelineTopology, Pipeline, Node|
+|IngressBytes|Yes|Ingress Bytes|Bytes|Total|The number of bytes ingressed by the pipeline node.|PipelineKind, PipelineTopology, Pipeline, Node|
+|Pipelines|Yes|Pipelines|Count|Total|The number of pipelines of each kind and state|PipelineKind, PipelineTopology, PipelineState|
## Microsoft.MixedReality/remoteRenderingAccounts
This latest update adds a new column and reorders the metrics to be alphabetical
|XregionReplicationTotalTransferBytes|Yes|Volume replication total transfer|Bytes|Average|Cumulative bytes transferred for the relationship.|No Dimensions|
-## Microsoft.Network/applicationGateways
+## Microsoft.Network/applicationgateways
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
This latest update adds a new column and reorders the metrics to be alphabetical
|BackendFirstByteResponseTime|No|Backend First Byte Response Time|MilliSeconds|Average|Time interval between start of establishing a connection to backend server and receiving the first byte of the response header, approximating processing time of backend server|Listener, BackendServer, BackendPool, BackendHttpSetting| |BackendLastByteResponseTime|No|Backend Last Byte Response Time|MilliSeconds|Average|Time interval between start of establishing a connection to backend server and receiving the last byte of the response body|Listener, BackendServer, BackendPool, BackendHttpSetting| |BackendResponseStatus|Yes|Backend Response Status|Count|Total|The number of HTTP response codes generated by the backend members. This does not include any response codes generated by the Application Gateway.|BackendServer, BackendPool, BackendHttpSetting, HttpStatusGroup|
+|BackendTlsNegotiationError|Yes|Backend TLS Connection Errors|Count|Total|TLS Connection Errors for Application Gateway Backend|BackendHttpSetting, BackendPool, ErrorType|
|BlockedCount|Yes|Web Application Firewall Blocked Requests Rule Distribution|Count|Total|Web Application Firewall blocked requests rule distribution|RuleGroup, RuleId| |BlockedReqCount|Yes|Web Application Firewall Blocked Requests Count|Count|Total|Web Application Firewall blocked requests count|No Dimensions| |BytesReceived|Yes|Bytes Received|Bytes|Total|The total number of bytes received by the Application Gateway from the clients|Listener|
This latest update adds a new column and reorders the metrics to be alphabetical
|HealthyHostCount|Yes|Healthy Host Count|Count|Average|Number of healthy backend hosts|BackendSettingsPool| |MatchedCount|Yes|Web Application Firewall Total Rule Distribution|Count|Total|Web Application Firewall Total Rule Distribution for the incoming traffic|RuleGroup, RuleId| |NewConnectionsPerSecond|No|New connections per second|CountPerSecond|Average|New connections per second established with Application Gateway|No Dimensions|
+|RejectedConnections|Yes|Rejected Connections|Count|Total|Count of rejected connections for Application Gateway Frontend|No Dimensions|
|ResponseStatus|Yes|Response Status|Count|Total|Http response status returned by Application Gateway|HttpStatusGroup| |Throughput|No|Throughput|BytesPerSecond|Average|Number of bytes per second the Application Gateway has served|No Dimensions| |TlsProtocol|Yes|Client TLS Protocol|Count|Total|The number of TLS and non-TLS requests initiated by the client that established connection with the Application Gateway. To view TLS protocol distribution, filter by the dimension TLS Protocol.|Listener, TlsProtocol|
This latest update adds a new column and reorders the metrics to be alphabetical
|UnhealthyHostCount|Yes|Unhealthy Host Count|Count|Average|Number of unhealthy backend hosts|BackendSettingsPool|
-## Microsoft.Network/azurefirewalls
+## Microsoft.Network/azureFirewalls
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|AllocatedSnatPorts|No|Allocated SNAT Ports|Count|Average|Total number of SNAT ports allocated within time period|FrontendIPAddress, BackendIPAddress, ProtocolType, |
+|AllocatedSnatPorts|No|Allocated SNAT Ports|Count|Average|Total number of SNAT ports allocated within time period|FrontendIPAddress, BackendIPAddress, ProtocolType, IsAwaitingRemoval|
|ByteCount|Yes|Byte Count|Bytes|Total|Total number of Bytes transmitted within time period|FrontendIPAddress, FrontendPort, Direction| |DipAvailability|Yes|Health Probe Status|Count|Average|Average Load Balancer health probe status per time duration|ProtocolType, BackendPort, FrontendIPAddress, FrontendPort, BackendIPAddress| |PacketCount|Yes|Packet Count|Count|Total|Total number of Packets transmitted within time period|FrontendIPAddress, FrontendPort, Direction| |SnatConnectionCount|Yes|SNAT Connection Count|Count|Total|Total number of new SNAT connections created within time period|FrontendIPAddress, BackendIPAddress, ConnectionState| |SYNCount|Yes|SYN Count|Count|Total|Total number of SYN Packets transmitted within time period|FrontendIPAddress, FrontendPort, Direction|
-|UsedSnatPorts|No|Used SNAT Ports|Count|Average|Total number of SNAT ports used within time period|FrontendIPAddress, BackendIPAddress, ProtocolType, |
+|UsedSnatPorts|No|Used SNAT Ports|Count|Average|Total number of SNAT ports used within time period|FrontendIPAddress, BackendIPAddress, ProtocolType, IsAwaitingRemoval|
|VipAvailability|Yes|Data Path Availability|Count|Average|Average Load Balancer data path availability per time duration|FrontendIPAddress, FrontendPort|
This latest update adds a new column and reorders the metrics to be alphabetical
|TestResult|Yes|Test Result|Count|Average|Connection monitor test result|SourceAddress, SourceName, SourceResourceId, SourceType, Protocol, DestinationAddress, DestinationName, DestinationResourceId, DestinationType, DestinationPort, TestGroupName, TestConfigurationName, TestResultCriterion, SourceIP, DestinationIP, SourceSubnet, DestinationSubnet|
-## Microsoft.Network/p2sVpnGateways
+## microsoft.network/p2svpngateways
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|P2SBandwidth|Yes|Gateway P2S Bandwidth|BytesPerSecond|Average|Average point-to-site bandwidth of a gateway in bytes per second|Instance|
+|P2SBandwidth|Yes|Gateway P2S Bandwidth|BytesPerSecond|Average|Point-to-site bandwidth of a gateway in bytes per second|Instance|
|P2SConnectionCount|Yes|P2S Connection Count|Count|Total|Point-to-site connection count of a gateway|Protocol, Instance|
+|UserVpnRouteCount|No|User Vpn Route Count|Count|Total|Count of P2S User Vpn routes learned by gateway|RouteType, Instance|
## Microsoft.Network/privateDnsZones
This latest update adds a new column and reorders the metrics to be alphabetical
|ProbeAgentCurrentEndpointStateByProfileResourceId|Yes|Endpoint Status by Endpoint|Count|Maximum|1 if an endpoint's probe status is "Enabled", 0 otherwise.|EndpointName|
|QpsByEndpoint|Yes|Queries by Endpoint Returned|Count|Total|Number of times a Traffic Manager endpoint was returned in the given time frame|EndpointName|
-
## Microsoft.Network/virtualHubs

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|CountOfRoutesAdvertisedToPeer|No|Count Of Routes Advertised To Peer|Count|Maximum|Total number of routes advertised to peer|routeserviceinstance, bgppeerip, bgppeertype|
|CountOfRoutesLearnedFromPeer|No|Count Of Routes Learned From Peer|Count|Maximum|Total number of routes learned from peer|routeserviceinstance, bgppeerip, bgppeertype|
-
-## Microsoft.Network/virtualNetworkGateways
+## microsoft.network/virtualnetworkgateways
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|AverageBandwidth|Yes|Gateway S2S Bandwidth|BytesPerSecond|Average|Average site-to-site bandwidth of a gateway in bytes per second|Instance|
-|ExpressRouteGatewayCountOfRoutesAdvertisedToPeer|Yes|Count Of Routes Advertised to Peer|Count|Maximum|Count Of Routes Advertised To Peer by ExpressRouteGateway|roleInstance|
-|ExpressRouteGatewayCountOfRoutesLearnedFromPeer|Yes|Count Of Routes Learned from Peer|Count|Maximum|Count Of Routes Learned From Peer by ExpressRouteGateway|roleInstance|
+|AverageBandwidth|Yes|Gateway S2S Bandwidth|BytesPerSecond|Average|Site-to-site bandwidth of a gateway in bytes per second|Instance|
+|BgpPeerStatus|No|BGP Peer Status|Count|Average|Status of BGP peer|BgpPeerAddress, Instance|
+|BgpRoutesAdvertised|Yes|BGP Routes Advertised|Count|Total|Count of Bgp Routes Advertised through tunnel|BgpPeerAddress, Instance|
+|BgpRoutesLearned|Yes|BGP Routes Learned|Count|Total|Count of Bgp Routes Learned through tunnel|BgpPeerAddress, Instance|
+|ExpressRouteGatewayCountOfRoutesAdvertisedToPeer|Yes|Count Of Routes Advertised to Peer (Preview)|Count|Maximum|Count Of Routes Advertised To Peer by ExpressRouteGateway|roleInstance|
+|ExpressRouteGatewayCountOfRoutesLearnedFromPeer|Yes|Count Of Routes Learned from Peer (Preview)|Count|Maximum|Count Of Routes Learned From Peer by ExpressRouteGateway|roleInstance|
|ExpressRouteGatewayCpuUtilization|Yes|CPU utilization|Percent|Average|CPU Utilization of the ExpressRoute Gateway|roleInstance|
-|ExpressRouteGatewayFrequencyOfRoutesChanged|No|Frequency of Routes change|Count|Total|Frequency of Routes change in ExpressRoute Gateway|roleInstance|
-|ExpressRouteGatewayNumberOfVmInVnet|No|Number of VMs in the Virtual Network|Count|Maximum|Number of VMs in the Virtual Network|No Dimensions|
+|ExpressRouteGatewayFrequencyOfRoutesChanged|No|Frequency of Routes change (Preview)|Count|Total|Frequency of Routes change in ExpressRoute Gateway|roleInstance|
+|ExpressRouteGatewayNumberOfVmInVnet|No|Number of VMs in the Virtual Network (Preview)|Count|Maximum|Number of VMs in the Virtual Network|roleInstance|
|ExpressRouteGatewayPacketsPerSecond|No|Packets per second|CountPerSecond|Average|Packet count of ExpressRoute Gateway|roleInstance|
-|P2SBandwidth|Yes|Gateway P2S Bandwidth|BytesPerSecond|Average|Average point-to-site bandwidth of a gateway in bytes per second|Instance|
-|P2SConnectionCount|Yes|P2S Connection Count|Count|Maximum|Point-to-site connection count of a gateway|Protocol, Instance|
+|MmsaCount|Yes|Tunnel MMSA Count|Count|Total|MMSA Count|ConnectionName, RemoteIP, Instance|
+|P2SBandwidth|Yes|Gateway P2S Bandwidth|BytesPerSecond|Average|Point-to-site bandwidth of a gateway in bytes per second|Instance|
+|P2SConnectionCount|Yes|P2S Connection Count|Count|Total|Point-to-site connection count of a gateway|Protocol, Instance|
+|QmsaCount|Yes|Tunnel QMSA Count|Count|Total|QMSA Count|ConnectionName, RemoteIP, Instance|
|TunnelAverageBandwidth|Yes|Tunnel Bandwidth|BytesPerSecond|Average|Average bandwidth of a tunnel in bytes per second|ConnectionName, RemoteIP, Instance| |TunnelEgressBytes|Yes|Tunnel Egress Bytes|Bytes|Total|Outgoing bytes of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelEgressPacketDropCount|Yes|Tunnel Egress Packet Drop Count|Count|Total|Count of outgoing packets dropped by tunnel|ConnectionName, RemoteIP, Instance|
|TunnelEgressPacketDropTSMismatch|Yes|Tunnel Egress TS Mismatch Packet Drop|Count|Total|Outgoing packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelEgressPackets|Yes|Tunnel Egress Packets|Count|Total|Outgoing packet count of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelIngressBytes|Yes|Tunnel Ingress Bytes|Bytes|Total|Incoming bytes of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelIngressPacketDropCount|Yes|Tunnel Ingress Packet Drop Count|Count|Total|Count of incoming packets dropped by tunnel|ConnectionName, RemoteIP, Instance|
|TunnelIngressPacketDropTSMismatch|Yes|Tunnel Ingress TS Mismatch Packet Drop|Count|Total|Incoming packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelIngressPackets|Yes|Tunnel Ingress Packets|Count|Total|Incoming packet count of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelNatAllocations|No|Tunnel NAT Allocations|Count|Total|Count of allocations for a NAT rule on a tunnel|NatRule, ConnectionName, RemoteIP, Instance|
-|TunnelNatedBytes|No|Tunnel NATed Bytes|Bytes|Total|Number of bytes that were NATed on a tunnel by a NAT rule |NatRule, ConnectionName, RemoteIP, Instance|
+|TunnelNatedBytes|No|Tunnel NATed Bytes|Bytes|Total|Number of bytes that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
|TunnelNatedPackets|No|Tunnel NATed Packets|Count|Total|Number of packets that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
-|TunnelNatFlowCount|No|Tunnel NAT Flows|Count|Total|Number of NAT flows on a tunnel by flow type and NAT rule|NatRule, ConnectionName, RemoteIP, FlowType, Instance|
-|TunnelNatPacketDrop|No|Tunnel NAT Packet Drops|Count|Total|Number of NATed packets on a tunnel that dropped by drop type and NAT rule|NatRule, ConnectionName, RemoteIP, DropType, Instance|
+|TunnelNatFlowCount|No|Tunnel NAT Flows|Count|Total|Number of NAT flows on a tunnel by flow type and NAT rule|NatRule, FlowType, ConnectionName, RemoteIP, Instance|
+|TunnelNatPacketDrop|No|Tunnel NAT Packet Drops|Count|Total|Number of NATed packets on a tunnel that dropped by drop type and NAT rule|NatRule, DropType, ConnectionName, RemoteIP, Instance|
+|TunnelPeakPackets|Yes|Tunnel Peak PPS|Count|Maximum|Tunnel Peak Packets Per Second|ConnectionName, RemoteIP, Instance|
|TunnelReverseNatedBytes|No|Tunnel Reverse NATed Bytes|Bytes|Total|Number of bytes that were reverse NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance| |TunnelReverseNatedPackets|No|Tunnel Reverse NATed Packets|Count|Total|Number of packets on a tunnel that were reverse NATed by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
+|TunnelTotalFlowCount|Yes|Tunnel Total Flow Count|Count|Total|Total flow count on a tunnel|ConnectionName, RemoteIP, Instance|
+|UserVpnRouteCount|No|User Vpn Route Count|Count|Total|Count of P2S User Vpn routes learned by gateway|RouteType, Instance|
+|VnetAddressPrefixCount|Yes|VNet Address Prefix Count|Count|Total|Count of Vnet address prefixes behind gateway|Instance|
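Once new gateway counters such as TunnelEgressPacketDropCount are available, a common follow-up is to alert on them. Below is a sketch using the `azure-mgmt-monitor` management SDK; the class and parameter names are as I understand that library, and the resource IDs, rule name, and threshold are placeholders rather than values from this article.

```python
# Sketch: create a metric alert when a VPN gateway tunnel starts dropping egress packets.
# Resource IDs, names, and thresholds are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertResource,
    MetricAlertSingleResourceMultipleMetricCriteria,
    MetricCriteria,
)

subscription_id = "<subscription-id>"
gateway_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.Network/virtualNetworkGateways/<gateway-name>"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Fire when more than 100 egress packets are dropped within a 5-minute window.
criteria = MetricAlertSingleResourceMultipleMetricCriteria(
    all_of=[
        MetricCriteria(
            name="TunnelEgressPacketDrops",
            metric_name="TunnelEgressPacketDropCount",
            time_aggregation="Total",
            operator="GreaterThan",
            threshold=100,
        )
    ]
)

client.metric_alerts.create_or_update(
    resource_group_name="<resource-group>",
    rule_name="vpn-tunnel-egress-drops",
    parameters=MetricAlertResource(
        location="global",
        description="Alert when a tunnel drops more than 100 egress packets in 5 minutes.",
        severity=3,
        enabled=True,
        scopes=[gateway_id],
        evaluation_frequency="PT5M",
        window_size="PT5M",
        criteria=criteria,
    ),
)
```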
## Microsoft.Network/virtualNetworks
This latest update adds a new column and reorders the metrics to be alphabetical
|PeeringAvailability|Yes|Bgp Availability|Percent|Average|BGP Availability between VirtualRouter and remote peers|Peer|
-## Microsoft.Network/vpnGateways
+## microsoft.network/vpngateways
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|AverageBandwidth|Yes|Gateway S2S Bandwidth|BytesPerSecond|Average|Average site-to-site bandwidth of a gateway in bytes per second|Instance|
+|AverageBandwidth|Yes|Gateway S2S Bandwidth|BytesPerSecond|Average|Site-to-site bandwidth of a gateway in bytes per second|Instance|
+|BgpPeerStatus|No|BGP Peer Status|Count|Average|Status of BGP peer|BgpPeerAddress, Instance|
+|BgpRoutesAdvertised|Yes|BGP Routes Advertised|Count|Total|Count of Bgp Routes Advertised through tunnel|BgpPeerAddress, Instance|
+|BgpRoutesLearned|Yes|BGP Routes Learned|Count|Total|Count of Bgp Routes Learned through tunnel|BgpPeerAddress, Instance|
+|MmsaCount|Yes|Tunnel MMSA Count|Count|Total|MMSA Count|ConnectionName, RemoteIP, Instance|
+|QmsaCount|Yes|Tunnel QMSA Count|Count|Total|QMSA Count|ConnectionName, RemoteIP, Instance|
|TunnelAverageBandwidth|Yes|Tunnel Bandwidth|BytesPerSecond|Average|Average bandwidth of a tunnel in bytes per second|ConnectionName, RemoteIP, Instance| |TunnelEgressBytes|Yes|Tunnel Egress Bytes|Bytes|Total|Outgoing bytes of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelEgressPacketDropCount|Yes|Tunnel Egress Packet Drop Count|Count|Total|Count of outgoing packets dropped by tunnel|ConnectionName, RemoteIP, Instance|
|TunnelEgressPacketDropTSMismatch|Yes|Tunnel Egress TS Mismatch Packet Drop|Count|Total|Outgoing packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelEgressPackets|Yes|Tunnel Egress Packets|Count|Total|Outgoing packet count of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelIngressBytes|Yes|Tunnel Ingress Bytes|Bytes|Total|Incoming bytes of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelIngressPacketDropCount|Yes|Tunnel Ingress Packet Drop Count|Count|Total|Count of incoming packets dropped by tunnel|ConnectionName, RemoteIP, Instance|
|TunnelIngressPacketDropTSMismatch|Yes|Tunnel Ingress TS Mismatch Packet Drop|Count|Total|Incoming packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelIngressPackets|Yes|Tunnel Ingress Packets|Count|Total|Incoming packet count of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelNatAllocations|No|Tunnel NAT Allocations|Count|Total|Count of allocations for a NAT rule on a tunnel|NatRule, ConnectionName, RemoteIP, Instance|
-|TunnelNatedBytes|No|Tunnel NATed Bytes|Bytes|Total|Number of bytes that were NATed on a tunnel by a NAT rule |NatRule, ConnectionName, RemoteIP, Instance|
+|TunnelNatedBytes|No|Tunnel NATed Bytes|Bytes|Total|Number of bytes that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
|TunnelNatedPackets|No|Tunnel NATed Packets|Count|Total|Number of packets that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
-|TunnelNatFlowCount|No|Tunnel NAT Flows|Count|Total|Number of NAT flows on a tunnel by flow type and NAT rule|NatRule, ConnectionName, RemoteIP, FlowType, Instance|
-|TunnelNatPacketDrop|No|Tunnel NAT Packet Drops|Count|Total|Number of NATed packets on a tunnel that dropped by drop type and NAT rule|NatRule, ConnectionName, RemoteIP, DropType, Instance|
+|TunnelNatFlowCount|No|Tunnel NAT Flows|Count|Total|Number of NAT flows on a tunnel by flow type and NAT rule|NatRule, FlowType, ConnectionName, RemoteIP, Instance|
+|TunnelNatPacketDrop|No|Tunnel NAT Packet Drops|Count|Total|Number of NATed packets on a tunnel that dropped by drop type and NAT rule|NatRule, DropType, ConnectionName, RemoteIP, Instance|
+|TunnelPeakPackets|Yes|Tunnel Peak PPS|Count|Maximum|Tunnel Peak Packets Per Second|ConnectionName, RemoteIP, Instance|
|TunnelReverseNatedBytes|No|Tunnel Reverse NATed Bytes|Bytes|Total|Number of bytes that were reverse NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance| |TunnelReverseNatedPackets|No|Tunnel Reverse NATed Packets|Count|Total|Number of packets on a tunnel that were reverse NATed by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
+|TunnelTotalFlowCount|Yes|Tunnel Total Flow Count|Count|Total|Total flow count on a tunnel|ConnectionName, RemoteIP, Instance|
+|VnetAddressPrefixCount|Yes|VNet Address Prefix Count|Count|Total|Count of Vnet address prefixes behind gateway|Instance|
## Microsoft.NotificationHubs/Namespaces/NotificationHubs
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|Average_% Available Memory|Yes|% Available Memory|Count|Average|Average_% Available Memory|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Available Swap Space|Yes|% Available Swap Space|Count|Average|Average_% Available Swap Space|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Committed Bytes In Use|Yes|% Committed Bytes In Use|Count|Average|Average_% Committed Bytes In Use|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% DPC Time|Yes|% DPC Time|Count|Average|Average_% DPC Time|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Free Inodes|Yes|% Free Inodes|Count|Average|Average_% Free Inodes|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Free Space|Yes|% Free Space|Count|Average|Average_% Free Space|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Idle Time|Yes|% Idle Time|Count|Average|Average_% Idle Time|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Interrupt Time|Yes|% Interrupt Time|Count|Average|Average_% Interrupt Time|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% IO Wait Time|Yes|% IO Wait Time|Count|Average|Average_% IO Wait Time|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Nice Time|Yes|% Nice Time|Count|Average|Average_% Nice Time|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Privileged Time|Yes|% Privileged Time|Count|Average|Average_% Privileged Time|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Processor Time|Yes|% Processor Time|Count|Average|Average_% Processor Time|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Used Inodes|Yes|% Used Inodes|Count|Average|Average_% Used Inodes|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Used Memory|Yes|% Used Memory|Count|Average|Average_% Used Memory|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Used Space|Yes|% Used Space|Count|Average|Average_% Used Space|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Used Swap Space|Yes|% Used Swap Space|Count|Average|Average_% Used Swap Space|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% User Time|Yes|% User Time|Count|Average|Average_% User Time|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Available MBytes|Yes|Available MBytes|Count|Average|Average_Available MBytes|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Available MBytes Memory|Yes|Available MBytes Memory|Count|Average|Average_Available MBytes Memory|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Available MBytes Swap|Yes|Available MBytes Swap|Count|Average|Average_Available MBytes Swap|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Avg. Disk sec/Read|Yes|Avg. Disk sec/Read|Count|Average|Average_Avg. Disk sec/Read|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Avg. Disk sec/Transfer|Yes|Avg. Disk sec/Transfer|Count|Average|Average_Avg. Disk sec/Transfer|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Avg. Disk sec/Write|Yes|Avg. Disk sec/Write|Count|Average|Average_Avg. Disk sec/Write|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Bytes Received/sec|Yes|Bytes Received/sec|Count|Average|Average_Bytes Received/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Bytes Sent/sec|Yes|Bytes Sent/sec|Count|Average|Average_Bytes Sent/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Bytes Total/sec|Yes|Bytes Total/sec|Count|Average|Average_Bytes Total/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Current Disk Queue Length|Yes|Current Disk Queue Length|Count|Average|Average_Current Disk Queue Length|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Read Bytes/sec|Yes|Disk Read Bytes/sec|Count|Average|Average_Disk Read Bytes/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Reads/sec|Yes|Disk Reads/sec|Count|Average|Average_Disk Reads/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Transfers/sec|Yes|Disk Transfers/sec|Count|Average|Average_Disk Transfers/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Write Bytes/sec|Yes|Disk Write Bytes/sec|Count|Average|Average_Disk Write Bytes/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Writes/sec|Yes|Disk Writes/sec|Count|Average|Average_Disk Writes/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Free Megabytes|Yes|Free Megabytes|Count|Average|Average_Free Megabytes|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Free Physical Memory|Yes|Free Physical Memory|Count|Average|Average_Free Physical Memory|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Free Space in Paging Files|Yes|Free Space in Paging Files|Count|Average|Average_Free Space in Paging Files|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Free Virtual Memory|Yes|Free Virtual Memory|Count|Average|Average_Free Virtual Memory|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Logical Disk Bytes/sec|Yes|Logical Disk Bytes/sec|Count|Average|Average_Logical Disk Bytes/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Page Reads/sec|Yes|Page Reads/sec|Count|Average|Average_Page Reads/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Page Writes/sec|Yes|Page Writes/sec|Count|Average|Average_Page Writes/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Pages/sec|Yes|Pages/sec|Count|Average|Average_Pages/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Pct Privileged Time|Yes|Pct Privileged Time|Count|Average|Average_Pct Privileged Time|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Pct User Time|Yes|Pct User Time|Count|Average|Average_Pct User Time|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Physical Disk Bytes/sec|Yes|Physical Disk Bytes/sec|Count|Average|Average_Physical Disk Bytes/sec|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Processes|Yes|Processes|Count|Average|Average_Processes|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Processor Queue Length|Yes|Processor Queue Length|Count|Average|Average_Processor Queue Length|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Size Stored In Paging Files|Yes|Size Stored In Paging Files|Count|Average|Average_Size Stored In Paging Files|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Bytes|Yes|Total Bytes|Count|Average|Average_Total Bytes|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Bytes Received|Yes|Total Bytes Received|Count|Average|Average_Total Bytes Received|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Bytes Transmitted|Yes|Total Bytes Transmitted|Count|Average|Average_Total Bytes Transmitted|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Collisions|Yes|Total Collisions|Count|Average|Average_Total Collisions|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Packets Received|Yes|Total Packets Received|Count|Average|Average_Total Packets Received|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Packets Transmitted|Yes|Total Packets Transmitted|Count|Average|Average_Total Packets Transmitted|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Rx Errors|Yes|Total Rx Errors|Count|Average|Average_Total Rx Errors|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Tx Errors|Yes|Total Tx Errors|Count|Average|Average_Total Tx Errors|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Uptime|Yes|Uptime|Count|Average|Average_Uptime|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Used MBytes Swap Space|Yes|Used MBytes Swap Space|Count|Average|Average_Used MBytes Swap Space|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Used Memory kBytes|Yes|Used Memory kBytes|Count|Average|Average_Used Memory kBytes|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Used Memory MBytes|Yes|Used Memory MBytes|Count|Average|Average_Used Memory MBytes|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Users|Yes|Users|Count|Average|Average_Users|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Virtual Shared Memory|Yes|Virtual Shared Memory|Count|Average|Average_Virtual Shared Memory|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Event|Yes|Event|Count|Average|Event|Source, EventLog, Computer, EventCategory, EventLevel, EventLevelName, EventID|
-|Heartbeat|Yes|Heartbeat|Count|Total|Heartbeat|Computer, OSType, Version, SourceComputerId|
-|Update|Yes|Update|Count|Average|Update|Computer, Product, Classification, UpdateState, Optional, Approved|
+|Average_% Available Memory|Yes|% Available Memory|Count|Average|Average_% Available Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Available Swap Space|Yes|% Available Swap Space|Count|Average|Average_% Available Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Committed Bytes In Use|Yes|% Committed Bytes In Use|Count|Average|Average_% Committed Bytes In Use. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% DPC Time|Yes|% DPC Time|Count|Average|Average_% DPC Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Free Inodes|Yes|% Free Inodes|Count|Average|Average_% Free Inodes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Free Space|Yes|% Free Space|Count|Average|Average_% Free Space. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Idle Time|Yes|% Idle Time|Count|Average|Average_% Idle Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Interrupt Time|Yes|% Interrupt Time|Count|Average|Average_% Interrupt Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% IO Wait Time|Yes|% IO Wait Time|Count|Average|Average_% IO Wait Time. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Nice Time|Yes|% Nice Time|Count|Average|Average_% Nice Time. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Privileged Time|Yes|% Privileged Time|Count|Average|Average_% Privileged Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Processor Time|Yes|% Processor Time|Count|Average|Average_% Processor Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Used Inodes|Yes|% Used Inodes|Count|Average|Average_% Used Inodes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Used Memory|Yes|% Used Memory|Count|Average|Average_% Used Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Used Space|Yes|% Used Space|Count|Average|Average_% Used Space. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Used Swap Space|Yes|% Used Swap Space|Count|Average|Average_% Used Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% User Time|Yes|% User Time|Count|Average|Average_% User Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Available MBytes|Yes|Available MBytes|Count|Average|Average_Available MBytes. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Available MBytes Memory|Yes|Available MBytes Memory|Count|Average|Average_Available MBytes Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Available MBytes Swap|Yes|Available MBytes Swap|Count|Average|Average_Available MBytes Swap. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Avg. Disk sec/Read|Yes|Avg. Disk sec/Read|Count|Average|Average_Avg. Disk sec/Read. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Avg. Disk sec/Transfer|Yes|Avg. Disk sec/Transfer|Count|Average|Average_Avg. Disk sec/Transfer. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Avg. Disk sec/Write|Yes|Avg. Disk sec/Write|Count|Average|Average_Avg. Disk sec/Write. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Bytes Received/sec|Yes|Bytes Received/sec|Count|Average|Average_Bytes Received/sec. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Bytes Sent/sec|Yes|Bytes Sent/sec|Count|Average|Average_Bytes Sent/sec. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Bytes Total/sec|Yes|Bytes Total/sec|Count|Average|Average_Bytes Total/sec. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Current Disk Queue Length|Yes|Current Disk Queue Length|Count|Average|Average_Current Disk Queue Length. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Read Bytes/sec|Yes|Disk Read Bytes/sec|Count|Average|Average_Disk Read Bytes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Reads/sec|Yes|Disk Reads/sec|Count|Average|Average_Disk Reads/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Transfers/sec|Yes|Disk Transfers/sec|Count|Average|Average_Disk Transfers/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Write Bytes/sec|Yes|Disk Write Bytes/sec|Count|Average|Average_Disk Write Bytes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Writes/sec|Yes|Disk Writes/sec|Count|Average|Average_Disk Writes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Free Megabytes|Yes|Free Megabytes|Count|Average|Average_Free Megabytes. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Free Physical Memory|Yes|Free Physical Memory|Count|Average|Average_Free Physical Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Free Space in Paging Files|Yes|Free Space in Paging Files|Count|Average|Average_Free Space in Paging Files. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Free Virtual Memory|Yes|Free Virtual Memory|Count|Average|Average_Free Virtual Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Logical Disk Bytes/sec|Yes|Logical Disk Bytes/sec|Count|Average|Average_Logical Disk Bytes/sec. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Page Reads/sec|Yes|Page Reads/sec|Count|Average|Average_Page Reads/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Page Writes/sec|Yes|Page Writes/sec|Count|Average|Average_Page Writes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Pages/sec|Yes|Pages/sec|Count|Average|Average_Pages/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Pct Privileged Time|Yes|Pct Privileged Time|Count|Average|Average_Pct Privileged Time. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Pct User Time|Yes|Pct User Time|Count|Average|Average_Pct User Time. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Physical Disk Bytes/sec|Yes|Physical Disk Bytes/sec|Count|Average|Average_Physical Disk Bytes/sec. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Processes|Yes|Processes|Count|Average|Average_Processes. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Processor Queue Length|Yes|Processor Queue Length|Count|Average|Average_Processor Queue Length. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Size Stored In Paging Files|Yes|Size Stored In Paging Files|Count|Average|Average_Size Stored In Paging Files. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Bytes|Yes|Total Bytes|Count|Average|Average_Total Bytes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Bytes Received|Yes|Total Bytes Received|Count|Average|Average_Total Bytes Received. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Bytes Transmitted|Yes|Total Bytes Transmitted|Count|Average|Average_Total Bytes Transmitted. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Collisions|Yes|Total Collisions|Count|Average|Average_Total Collisions. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Packets Received|Yes|Total Packets Received|Count|Average|Average_Total Packets Received. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Packets Transmitted|Yes|Total Packets Transmitted|Count|Average|Average_Total Packets Transmitted. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Rx Errors|Yes|Total Rx Errors|Count|Average|Average_Total Rx Errors. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Tx Errors|Yes|Total Tx Errors|Count|Average|Average_Total Tx Errors. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Uptime|Yes|Uptime|Count|Average|Average_Uptime. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Used MBytes Swap Space|Yes|Used MBytes Swap Space|Count|Average|Average_Used MBytes Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Used Memory kBytes|Yes|Used Memory kBytes|Count|Average|Average_Used Memory kBytes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Used Memory MBytes|Yes|Used Memory MBytes|Count|Average|Average_Used Memory MBytes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Users|Yes|Users|Count|Average|Average_Users. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Virtual Shared Memory|Yes|Virtual Shared Memory|Count|Average|Average_Virtual Shared Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Event|Yes|Event|Count|Average|Event. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Source, EventLog, Computer, EventCategory, EventLevel, EventLevelName, EventID|
+|Heartbeat|Yes|Heartbeat|Count|Total|Heartbeat. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, OSType, Version, SourceComputerId|
+|Update|Yes|Update|Count|Average|Update. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, Product, Classification, UpdateState, Optional, Approved|
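The Average_* counters above are log-based metrics built from Log Analytics performance-counter records. As a hedged sketch (the `Perf` table and its columns are the standard Log Analytics schema, not values taken from this table), a query such as the following inspects the data behind the % Processor Time metric:

```kusto
// Hedged sketch: average CPU per computer over the last hour, using the Perf
// records that back the log-based "Average_% Processor Time" metric.
Perf
| where TimeGenerated > ago(1h)
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| summarize AvgCpuPercent = avg(CounterValue) by Computer, bin(TimeGenerated, 5m)
| order by TimeGenerated asc
```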
## Microsoft.Peering/peerings
This latest update adds a new column and reorders the metrics to be alphabetical
|workload_qpu_metric|Yes|QPU Per Workload (Gen1)|Count|Average|QPU Per Workload. Range for A1 is 0-20, A2 is 0-40, A3 is 0-40, A4 is 0-80, A5 is 0-160, A6 is 0-320. Supported only for Power BI Embedded Generation 1 resources.|Workload|
-## Microsoft.Purview/accounts
+## microsoft.purview/accounts
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|ScanBillingUnits|Yes|Scan Billing Units|Count|Total|Indicates the scan billing units.|ResourceId|
-|ScanCancelled|Yes|Scan Cancelled|Count|Total|Indicates the number of scans cancelled.|ResourceId|
-|ScanCompleted|Yes|Scan Completed|Count|Total|Indicates the number of scans completed successfully.|ResourceId|
-|ScanFailed|Yes|Scan Failed|Count|Total|Indicates the number of scans failed.|ResourceId|
-|ScanTimeTaken|Yes|Scan time taken|Seconds|Total|Indicates the total scan time in seconds.|ResourceId|
+|DataMapCapacityUnits|Yes|Data Map Capacity Units|Count|Total|Indicates Data Map Capacity Units.|No Dimensions|
+|DataMapStorageSize|Yes|Data Map Storage Size|Bytes|Total|Indicates the data map storage size.|No Dimensions|
+|ScanCancelled|Yes|Scan Cancelled|Count|Total|Indicates the number of scans cancelled.|No Dimensions|
+|ScanCompleted|Yes|Scan Completed|Count|Total|Indicates the number of scans completed successfully.|No Dimensions|
+|ScanFailed|Yes|Scan Failed|Count|Total|Indicates the number of scans failed.|No Dimensions|
+|ScanTimeTaken|Yes|Scan time taken|Seconds|Total|Indicates the total scan time in seconds.|No Dimensions|
## Microsoft.RecoveryServices/Vaults
This latest update adds a new column and reorders the metrics to be alphabetical
|ActiveConnections|No|ActiveConnections|Count|Total|Total ActiveConnections for Microsoft.Relay.|EntityName| |ActiveListeners|No|ActiveListeners|Count|Total|Total ActiveListeners for Microsoft.Relay.|EntityName| |BytesTransferred|Yes|BytesTransferred|Bytes|Total|Total BytesTransferred for Microsoft.Relay.|EntityName|
-|ListenerConnections-ClientError|No|ListenerConnections-ClientError|Count|Total|ClientError on ListenerConnections for Microsoft.Relay.|EntityName, |
-|ListenerConnections-ServerError|No|ListenerConnections-ServerError|Count|Total|ServerError on ListenerConnections for Microsoft.Relay.|EntityName, |
-|ListenerConnections-Success|No|ListenerConnections-Success|Count|Total|Successful ListenerConnections for Microsoft.Relay.|EntityName, |
+|ListenerConnections-ClientError|No|ListenerConnections-ClientError|Count|Total|ClientError on ListenerConnections for Microsoft.Relay.|EntityName, OperationResult|
+|ListenerConnections-ServerError|No|ListenerConnections-ServerError|Count|Total|ServerError on ListenerConnections for Microsoft.Relay.|EntityName, OperationResult|
+|ListenerConnections-Success|No|ListenerConnections-Success|Count|Total|Successful ListenerConnections for Microsoft.Relay.|EntityName, OperationResult|
|ListenerConnections-TotalRequests|No|ListenerConnections-TotalRequests|Count|Total|Total ListenerConnections for Microsoft.Relay.|EntityName| |ListenerDisconnects|No|ListenerDisconnects|Count|Total|Total ListenerDisconnects for Microsoft.Relay.|EntityName|
-|SenderConnections-ClientError|No|SenderConnections-ClientError|Count|Total|ClientError on SenderConnections for Microsoft.Relay.|EntityName, |
-|SenderConnections-ServerError|No|SenderConnections-ServerError|Count|Total|ServerError on SenderConnections for Microsoft.Relay.|EntityName, |
-|SenderConnections-Success|No|SenderConnections-Success|Count|Total|Successful SenderConnections for Microsoft.Relay.|EntityName, |
+|SenderConnections-ClientError|No|SenderConnections-ClientError|Count|Total|ClientError on SenderConnections for Microsoft.Relay.|EntityName, OperationResult|
+|SenderConnections-ServerError|No|SenderConnections-ServerError|Count|Total|ServerError on SenderConnections for Microsoft.Relay.|EntityName, OperationResult|
+|SenderConnections-Success|No|SenderConnections-Success|Count|Total|Successful SenderConnections for Microsoft.Relay.|EntityName, OperationResult|
|SenderConnections-TotalRequests|No|SenderConnections-TotalRequests|Count|Total|Total SenderConnections requests for Microsoft.Relay.|EntityName| |SenderDisconnects|No|SenderDisconnects|Count|Total|Total SenderDisconnects for Microsoft.Relay.|EntityName| - ## microsoft.resources/subscriptions |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|Latency|No|Latency|Seconds|Average|Latency data for all requests to Azure Resource Manager|IsCustomerOriginated, Method, Namespace, RequestRegion, ResourceType, StatusCode, StatusCodeClass, Microsoft.SubscriptionId| |Traffic|No|Traffic|Count|Count|Traffic data for all requests to Azure Resource Manager|IsCustomerOriginated, Method, Namespace, RequestRegion, ResourceType, StatusCode, StatusCodeClass, Microsoft.SubscriptionId| - ## Microsoft.Search/searchServices |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
+|DocumentsProcessedCount|Yes|Document processed count|Count|Total|Number of documents processed|DataSourceName, Failed, IndexerName, IndexName, SkillsetName|
|SearchLatency|Yes|Search Latency|Seconds|Average|Average search latency for the search service|No Dimensions| |SearchQueriesPerSecond|Yes|Search queries per second|CountPerSecond|Average|Search queries per second for the search service|No Dimensions|
+|SkillExecutionCount|Yes|Skill execution invocation count|Count|Total|Number of skill executions|DataSourceName, Failed, IndexerName, SkillName, SkillsetName, SkillType|
|ThrottledSearchQueriesPercentage|Yes|Throttled search queries percentage|Percent|Average|Percentage of search queries that were throttled for the search service|No Dimensions|
-## Microsoft.ServiceBus/namespaces
+## Microsoft.ServiceBus/Namespaces
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|AbandonMessage|Yes|Abandoned Messages|Count|Total|Abandoned Messages|EntityName|
+|AbandonMessage|Yes|Abandoned Messages|Count|Total|Count of messages abandoned on a Queue/Topic.|EntityName|
|ActiveConnections|No|ActiveConnections|Count|Total|Total Active Connections for Microsoft.ServiceBus.|No Dimensions| |ActiveMessages|No|Count of active messages in a Queue/Topic.|Count|Average|Count of active messages in a Queue/Topic.|EntityName|
-|CompleteMessage|Yes|Completed Messages|Count|Total|Completed Messages|EntityName|
+|CompleteMessage|Yes|Completed Messages|Count|Total|Count of messages completed on a Queue/Topic.|EntityName|
|ConnectionsClosed|No|Connections Closed.|Count|Average|Connections Closed for Microsoft.ServiceBus.|EntityName| |ConnectionsOpened|No|Connections Opened.|Count|Average|Connections Opened for Microsoft.ServiceBus.|EntityName|
-|CPUXNS|No|CPU (Deprecated)|Percent|Maximum|Service bus premium namespace CPU usage metric. This metric is depricated. Please use the CPU metric (NamespaceCpuUsage) instead.|No Dimensions|
+|CPUXNS|No|CPU (Deprecated)|Percent|Maximum|Service bus premium namespace CPU usage metric. This metric is deprecated. Please use the CPU metric (NamespaceCpuUsage) instead.|Replica|
|DeadletteredMessages|No|Count of dead-lettered messages in a Queue/Topic.|Count|Average|Count of dead-lettered messages in a Queue/Topic.|EntityName| |IncomingMessages|Yes|Incoming Messages|Count|Total|Incoming Messages for Microsoft.ServiceBus.|EntityName| |IncomingRequests|Yes|Incoming Requests|Count|Total|Incoming Requests for Microsoft.ServiceBus.|EntityName| |Messages|No|Count of messages in a Queue/Topic.|Count|Average|Count of messages in a Queue/Topic.|EntityName|
-|NamespaceCpuUsage|No|CPU|Percent|Maximum|CPU usage metric for Premium SKU namespaces.|No Dimensions|
-|NamespaceMemoryUsage|No|Memory Usage|Percent|Maximum|Memory usage metric for Premium SKU namespaces.|No Dimensions|
+|NamespaceCpuUsage|No|CPU|Percent|Maximum|Service bus premium namespace CPU usage metric.|Replica|
+|NamespaceMemoryUsage|No|Memory Usage|Percent|Maximum|Service bus premium namespace memory usage metric.|Replica|
|OutgoingMessages|Yes|Outgoing Messages|Count|Total|Outgoing Messages for Microsoft.ServiceBus.|EntityName| |PendingCheckpointOperationCount|No|Pending Checkpoint Operations Count.|Count|Total|Pending Checkpoint Operations Count.|No Dimensions| |ScheduledMessages|No|Count of scheduled messages in a Queue/Topic.|Count|Average|Count of scheduled messages in a Queue/Topic.|EntityName|
-|ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.ServiceBus.|EntityName, |
-|ServerSendLatency|Yes|Server Send Latency.|Milliseconds|Average|Server Send Latency.|EntityName|
+|ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.ServiceBus.|EntityName, OperationResult|
|Size|No|Size|Bytes|Average|Size of an Queue/Topic in Bytes.|EntityName|
-|SuccessfulRequests|No|Successful Requests|Count|Total|Total successful requests for a namespace|EntityName, |
-|ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.ServiceBus.|EntityName, MessagingErrorSubCode|
-|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.ServiceBus.|EntityName, |
-|WSXNS|No|Memory Usage (Deprecated)|Percent|Maximum|Service bus premium namespace memory usage metric. This metric is deprecated. Please use the Memory Usage (NamespaceMemoryUsage) metric instead.|No Dimensions|
+|SuccessfulRequests|No|Successful Requests|Count|Total|Total successful requests for a namespace|EntityName, OperationResult|
+|ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.ServiceBus.|EntityName, OperationResult, MessagingErrorSubCode|
+|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.ServiceBus.|EntityName, OperationResult|
+|WSXNS|No|Memory Usage (Deprecated)|Percent|Maximum|Service bus premium namespace memory usage metric. This metric is deprecated. Please use the Memory Usage (NamespaceMemoryUsage) metric instead.|Replica|
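As a hedged illustration, if these namespace metrics are also routed to a Log Analytics workspace through diagnostic settings, a query against the standard `AzureMetrics` table (which stores the aggregated values but not the dimensions listed above) could trend one of them:

```kusto
// Hedged sketch: hourly throttled-request totals per Service Bus namespace,
// assuming platform metrics are exported to the AzureMetrics table.
AzureMetrics
| where TimeGenerated > ago(1d)
| where ResourceProvider == "MICROSOFT.SERVICEBUS" and MetricName == "ThrottledRequests"
| summarize ThrottledRequests = sum(Total) by Resource, bin(TimeGenerated, 1h)
```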
## Microsoft.SignalRService/SignalR |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
+|ConnectionCloseCount|Yes|Connection Close Count|Count|Total|The count of connections closed by various reasons.|Endpoint, ConnectionCloseCategory|
|ConnectionCount|Yes|Connection Count|Count|Maximum|The number of user connections.|Endpoint|
+|ConnectionOpenCount|Yes|Connection Open Count|Count|Total|The count of new connections opened.|Endpoint|
|ConnectionQuotaUtilization|Yes|Connection Quota Utilization|Percent|Maximum|The percentage of connection connected relative to connection quota.|No Dimensions| |InboundTraffic|Yes|Inbound Traffic|Bytes|Total|The inbound traffic of service|No Dimensions| |MessageCount|Yes|Message Count|Count|Total|The total amount of messages.|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|InboundTraffic|Yes|Inbound Traffic|Bytes|Total|The inbound traffic of service|No Dimensions|
-|OutboundTraffic|Yes|Outbound Traffic|Bytes|Total|The outbound traffic of service|No Dimensions|
-|TotalConnectionCount|Yes|Connection Count|Count|Maximum|The amount of user connection.|No Dimensions|
+|ConnectionCloseCount|Yes|Connection Close Count|Count|Total|The count of connections closed by various reasons.|ConnectionCloseCategory|
+|ConnectionOpenCount|Yes|Connection Open Count|Count|Total|The count of new connections opened.|No Dimensions|
+|ConnectionQuotaUtilization|Yes|Connection Quota Utilization|Percent|Maximum|The percentage of connection connected relative to connection quota.|No Dimensions|
+|InboundTraffic|Yes|Inbound Traffic|Bytes|Total|The traffic originating from outside to inside of the service. It is aggregated by adding all the bytes of the traffic.|No Dimensions|
+|OutboundTraffic|Yes|Outbound Traffic|Bytes|Total|The traffic originating from inside to outside of the service. It is aggregated by adding all the bytes of the traffic.|No Dimensions|
+|TotalConnectionCount|Yes|Connection Count|Count|Maximum|The number of user connections established to the service. It is aggregated by adding all the online connections.|No Dimensions|
## Microsoft.Sql/managedInstances
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |ClientIOPS|Yes|Total Client IOPS|Count|Average|The rate of client file operations processed by the Cache.|No Dimensions|
-|ClientLatency|Yes|Average Client Latency|Milliseconds|Average|Average latency of client file operations to the Cache.|No Dimensions|
+|ClientLatency|Yes|Average Client Latency|MilliSeconds|Average|Average latency of client file operations to the Cache.|No Dimensions|
|ClientLockIOPS|Yes|Client Lock IOPS|CountPerSecond|Average|Client file locking operations per second.|No Dimensions| |ClientMetadataReadIOPS|Yes|Client Metadata Read IOPS|CountPerSecond|Average|The rate of client file operations sent to the Cache, excluding data reads, that do not modify persistent state.|No Dimensions| |ClientMetadataWriteIOPS|Yes|Client Metadata Write IOPS|CountPerSecond|Average|The rate of client file operations sent to the Cache, excluding data writes, that modify persistent state.|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|StorageTargetFreeWriteSpace|Yes|Storage Target Free Write Space|Bytes|Average|Write space available for dirty data associated with a storage target.|StorageTarget| |StorageTargetHealth|Yes|Storage Target Health|Count|Average|Boolean results of connectivity test between the Cache and Storage Targets.|No Dimensions| |StorageTargetIOPS|Yes|Total StorageTarget IOPS|Count|Average|The rate of all file operations the Cache sends to a particular StorageTarget.|StorageTarget|
-|StorageTargetLatency|Yes|StorageTarget Latency|Milliseconds|Average|The average round trip latency of all the file operations the Cache sends to a partricular StorageTarget.|StorageTarget|
+|StorageTargetLatency|Yes|StorageTarget Latency|MilliSeconds|Average|The average round trip latency of all the file operations the Cache sends to a particular StorageTarget.|StorageTarget|
|StorageTargetMetadataReadIOPS|Yes|StorageTarget Metadata Read IOPS|CountPerSecond|Average|The rate of file operations that do not modify persistent state, and excluding the read operation, that the Cache sends to a particular StorageTarget.|StorageTarget| |StorageTargetMetadataWriteIOPS|Yes|StorageTarget Metadata Write IOPS|CountPerSecond|Average|The rate of file operations that do modify persistent state and excluding the write operation, that the Cache sends to a particular StorageTarget.|StorageTarget| |StorageTargetReadAheadThroughput|Yes|StorageTarget Read Ahead Throughput|BytesPerSecond|Average|The rate the Cache opportunistically reads data from the StorageTarget.|StorageTarget|
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|AppConnections|Yes|Connections|Count|Average|The number of bound sockets existing in the sandbox (w3wp.exe and its child processes). A bound socket is created by calling bind()/connect() APIs and remains until said socket is closed with CloseHandle()/closesocket().|Instance|
-|AverageMemoryWorkingSet|Yes|Average memory working set|Bytes|Average|The average amount of memory used by the app, in megabytes (MiB).|Instance|
-|AverageResponseTime|Yes|Average Response Time (deprecated)|Seconds|Average|The average time taken for the app to serve requests, in seconds.|Instance|
-|BytesReceived|Yes|Data In|Bytes|Total|The amount of incoming bandwidth consumed by the app, in MiB.|Instance|
-|BytesSent|Yes|Data Out|Bytes|Total|The amount of outgoing bandwidth consumed by the app, in MiB.|Instance|
-|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric. Please see: https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
-|CurrentAssemblies|Yes|Current Assemblies|Count|Average|The current number of Assemblies loaded across all AppDomains in this application.|Instance|
-|FileSystemUsage|Yes|File System Usage|Bytes|Average|Percentage of filesystem quota consumed by the app.|No Dimensions|
-|FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count|Instance|
-|FunctionExecutionUnits|Yes|Function Execution Units|Count|Total|Function Execution Units|Instance|
-|Gen0Collections|Yes|Gen 0 Garbage Collections|Count|Total|The number of times the generation 0 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs.|Instance|
-|Gen1Collections|Yes|Gen 1 Garbage Collections|Count|Total|The number of times the generation 1 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs.|Instance|
-|Gen2Collections|Yes|Gen 2 Garbage Collections|Count|Total|The number of times the generation 2 objects are garbage collected since the start of the app process.|Instance|
-|Handles|Yes|Handle Count|Count|Average|The total number of handles currently open by the app process.|Instance|
-|HealthCheckStatus|Yes|Health check status|Count|Average|Health check status|Instance|
-|Http101|Yes|Http 101|Count|Total|The count of requests resulting in an HTTP status code 101.|Instance|
-|Http2xx|Yes|Http 2xx|Count|Total|The count of requests resulting in an HTTP status code = 200 but < 300.|Instance|
-|Http3xx|Yes|Http 3xx|Count|Total|The count of requests resulting in an HTTP status code = 300 but < 400.|Instance|
-|Http401|Yes|Http 401|Count|Total|The count of requests resulting in HTTP 401 status code.|Instance|
-|Http403|Yes|Http 403|Count|Total|The count of requests resulting in HTTP 403 status code.|Instance|
-|Http404|Yes|Http 404|Count|Total|The count of requests resulting in HTTP 404 status code.|Instance|
-|Http406|Yes|Http 406|Count|Total|The count of requests resulting in HTTP 406 status code.|Instance|
-|Http4xx|Yes|Http 4xx|Count|Total|The count of requests resulting in an HTTP status code = 400 but < 500.|Instance|
-|Http5xx|Yes|Http Server Errors|Count|Total|The count of requests resulting in an HTTP status code = 500 but < 600.|Instance|
-|HttpResponseTime|Yes|Response Time|Seconds|Average|The time taken for the app to serve requests, in seconds.|Instance|
-|IoOtherBytesPerSecond|Yes|IO Other Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is issuing bytes to I/O operations that don't involve data, such as control operations.|Instance|
-|IoOtherOperationsPerSecond|Yes|IO Other Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing I/O operations that aren't read or write operations.|Instance|
-|IoReadBytesPerSecond|Yes|IO Read Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is reading bytes from I/O operations.|Instance|
-|IoReadOperationsPerSecond|Yes|IO Read Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing read I/O operations.|Instance|
-|IoWriteBytesPerSecond|Yes|IO Write Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is writing bytes to I/O operations.|Instance|
-|IoWriteOperationsPerSecond|Yes|IO Write Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing write I/O operations.|Instance|
-|MemoryWorkingSet|Yes|Memory working set|Bytes|Average|The current amount of memory used by the app, in MiB.|Instance|
-|PrivateBytes|Yes|Private Bytes|Bytes|Average|Private Bytes is the current size, in bytes, of memory that the app process has allocated that can't be shared with other processes.|Instance|
-|Requests|Yes|Requests|Count|Total|The total number of requests regardless of their resulting HTTP status code.|Instance|
-|RequestsInApplicationQueue|Yes|Requests In Application Queue|Count|Average|The number of requests in the application request queue.|Instance|
-|ScmCpuTime|Yes|ScmCpuTime|Seconds|Total|ScmCpuTime|Instance|
-|ScmPrivateBytes|Yes|ScmPrivateBytes|Bytes|Average|ScmPrivateBytes|Instance|
-|Threads|Yes|Thread Count|Count|Average|The number of threads currently active in the app process.|Instance|
-|TotalAppDomains|Yes|Total App Domains|Count|Average|The current number of AppDomains loaded in this application.|Instance|
-|TotalAppDomainsUnloaded|Yes|Total App Domains Unloaded|Count|Average|The total number of AppDomains unloaded since the start of the application.|Instance|
+|AppConnections|Yes|Connections|Count|Average|The number of bound sockets existing in the sandbox (w3wp.exe and its child processes). A bound socket is created by calling bind()/connect() APIs and remains until said socket is closed with CloseHandle()/closesocket(). For WebApps and FunctionApps.|Instance|
+|AverageMemoryWorkingSet|Yes|Average memory working set|Bytes|Average|The average amount of memory used by the app, in megabytes (MiB). For WebApps and FunctionApps.|Instance|
+|AverageResponseTime|Yes|Average Response Time (deprecated)|Seconds|Average|The average time taken for the app to serve requests, in seconds. For WebApps and FunctionApps.|Instance|
+|BytesReceived|Yes|Data In|Bytes|Total|The amount of incoming bandwidth consumed by the app, in MiB. For WebApps and FunctionApps.|Instance|
+|BytesSent|Yes|Data Out|Bytes|Total|The amount of outgoing bandwidth consumed by the app, in MiB. For WebApps and FunctionApps.|Instance|
+|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric, see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage). For WebApps only.|Instance|
+|CurrentAssemblies|Yes|Current Assemblies|Count|Average|The current number of Assemblies loaded across all AppDomains in this application. For WebApps and FunctionApps.|Instance|
+|FileSystemUsage|Yes|File System Usage|Bytes|Average|Percentage of filesystem quota consumed by the app. For WebApps and FunctionApps.|No Dimensions|
+|FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count. For FunctionApps only.|Instance|
+|FunctionExecutionUnits|Yes|Function Execution Units|Count|Total|Function Execution Units. For FunctionApps only.|Instance|
+|Gen0Collections|Yes|Gen 0 Garbage Collections|Count|Total|The number of times the generation 0 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs. For WebApps and FunctionApps.|Instance|
+|Gen1Collections|Yes|Gen 1 Garbage Collections|Count|Total|The number of times the generation 1 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs. For WebApps and FunctionApps.|Instance|
+|Gen2Collections|Yes|Gen 2 Garbage Collections|Count|Total|The number of times the generation 2 objects are garbage collected since the start of the app process. For WebApps and FunctionApps.|Instance|
+|Handles|Yes|Handle Count|Count|Average|The total number of handles currently open by the app process. For WebApps and FunctionApps.|Instance|
+|HealthCheckStatus|Yes|Health check status|Count|Average|Health check status. For WebApps and FunctionApps.|Instance|
+|Http101|Yes|Http 101|Count|Total|The count of requests resulting in an HTTP status code 101. For WebApps and FunctionApps.|Instance|
+|Http2xx|Yes|Http 2xx|Count|Total|The count of requests resulting in an HTTP status code = 200 but < 300. For WebApps and FunctionApps.|Instance|
+|Http3xx|Yes|Http 3xx|Count|Total|The count of requests resulting in an HTTP status code = 300 but < 400. For WebApps and FunctionApps.|Instance|
+|Http401|Yes|Http 401|Count|Total|The count of requests resulting in HTTP 401 status code. For WebApps and FunctionApps.|Instance|
+|Http403|Yes|Http 403|Count|Total|The count of requests resulting in HTTP 403 status code. For WebApps and FunctionApps.|Instance|
+|Http404|Yes|Http 404|Count|Total|The count of requests resulting in HTTP 404 status code. For WebApps and FunctionApps.|Instance|
+|Http406|Yes|Http 406|Count|Total|The count of requests resulting in HTTP 406 status code. For WebApps and FunctionApps.|Instance|
+|Http4xx|Yes|Http 4xx|Count|Total|The count of requests resulting in an HTTP status code = 400 but < 500. For WebApps and FunctionApps.|Instance|
+|Http5xx|Yes|Http Server Errors|Count|Total|The count of requests resulting in an HTTP status code = 500 but < 600. For WebApps and FunctionApps.|Instance|
+|HttpResponseTime|Yes|Response Time|Seconds|Average|The time taken for the app to serve requests, in seconds. For WebApps and FunctionApps.|Instance|
+|IoOtherBytesPerSecond|Yes|IO Other Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is issuing bytes to I/O operations that don't involve data, such as control operations. For WebApps and FunctionApps.|Instance|
+|IoOtherOperationsPerSecond|Yes|IO Other Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing I/O operations that aren't read or write operations. For WebApps and FunctionApps.|Instance|
+|IoReadBytesPerSecond|Yes|IO Read Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is reading bytes from I/O operations. For WebApps and FunctionApps.|Instance|
+|IoReadOperationsPerSecond|Yes|IO Read Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing read I/O operations. For WebApps and FunctionApps.|Instance|
+|IoWriteBytesPerSecond|Yes|IO Write Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is writing bytes to I/O operations. For WebApps and FunctionApps.|Instance|
+|IoWriteOperationsPerSecond|Yes|IO Write Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing write I/O operations. For WebApps and FunctionApps.|Instance|
+|MemoryWorkingSet|Yes|Memory working set|Bytes|Average|The current amount of memory used by the app, in MiB. For WebApps and FunctionApps.|Instance|
+|PrivateBytes|Yes|Private Bytes|Bytes|Average|Private Bytes is the current size, in bytes, of memory that the app process has allocated that can't be shared with other processes. For WebApps and FunctionApps.|Instance|
+|Requests|Yes|Requests|Count|Total|The total number of requests regardless of their resulting HTTP status code. For WebApps and FunctionApps.|Instance|
+|RequestsInApplicationQueue|Yes|Requests In Application Queue|Count|Average|The number of requests in the application request queue. For WebApps and FunctionApps.|Instance|
+|Threads|Yes|Thread Count|Count|Average|The number of threads currently active in the app process. For WebApps and FunctionApps.|Instance|
+|TotalAppDomains|Yes|Total App Domains|Count|Average|The current number of AppDomains loaded in this application. For WebApps and FunctionApps.|Instance|
+|TotalAppDomainsUnloaded|Yes|Total App Domains Unloaded|Count|Average|The total number of AppDomains unloaded since the start of the application. For WebApps and FunctionApps.|Instance|
## Microsoft.Web/sites/slots
This latest update adds a new column and reorders the metrics to be alphabetical
|CdnPercentageOf5XX|Yes|CdnPercentageOf5XX|Percent|Total|CdnPercentageOf5XX|Instance| |CdnRequestCount|Yes|CdnRequestCount|Count|Total|CdnRequestCount|Instance| |CdnResponseSize|Yes|CdnResponseSize|Bytes|Total|CdnResponseSize|Instance|
-|CdnTotalLatency|Yes|CdnTotalLatency|Seconds|Total|CdnTotalLatency|Instance|
+|CdnTotalLatency|Yes|CdnTotalLatency|MilliSeconds|Total|CdnTotalLatency|Instance|
|FunctionErrors|Yes|FunctionErrors|Count|Total|FunctionErrors|Instance| |FunctionHits|Yes|FunctionHits|Count|Total|FunctionHits|Instance| |SiteErrors|Yes|SiteErrors|Count|Total|SiteErrors|Instance| |SiteHits|Yes|SiteHits|Count|Total|SiteHits|Instance| - ## Wandisco.Fusion/migrators |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalMigratedDataInBytes|Yes|Total Migrated Data in Bytes|Bytes|Total|This provides a view of the successfully migrated Bytes for a given migrator|No Dimensions| |TotalTransactions|Yes|Total Transactions|Count|Total|This provides a running total of the Data Transactions for which the user could be billed.|No Dimensions| - ## Next steps - [Read about metrics in Azure Monitor](../data-platform.md)
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-categories.md
Title: Supported categories for Azure Monitor resource logs description: Understand the supported services and event schemas for Azure Monitor resource logs. Previously updated : 02/08/2022 Last updated : 03/03/2022
Some categories might be supported only for specific types of resources. See the
If you think something is missing, you can open a GitHub comment at the bottom of this article. -
-## Microsoft.AAD/domainServices
+## Microsoft.AAD/DomainServices
|Category|Category Display Name|Costs To Export| ||||
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| |||| |GatewayLogs|Logs related to ApiManagement Gateway|No|
+|WebSocketConnectionLogs|Logs related to Websocket Connections|Yes|
## Microsoft.AppConfiguration/configurationStores
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| |||| |AuditEvent|AuditEvent message log category.|No|
+|AuditEvent|AuditEvent message log category.|No|
|ERR|Error message log category.|No|
+|ERR|Error message log category.|No|
+|INF|Informational message log category.|No|
|INF|Informational message log category.|No|
+|NotProcessed|Requests which could not be processed.|Yes|
+|Operational|Operational message log category.|Yes|
+|WRN|Warning message log category.|Yes|
|WRN|Warning message log category.|No|
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| ||||
-|DscNodeStatus|Dsc Node Status|No|
-|JobLogs|Job Logs|No|
-|JobStreams|Job Streams|No|
+|AuditEvent|AuditEvent|Yes|
+|DscNodeStatus|DscNodeStatus|No|
+|JobLogs|JobLogs|No|
+|JobStreams|JobStreams|No|
## Microsoft.AutonomousDevelopmentPlatform/accounts
If you think something is missing, you can open a GitHub comment at the bottom o
|Request|Request|Yes|
+## Microsoft.AutonomousDevelopmentPlatform/datapools
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit|Audit|Yes|
+|Operational|Operational|Yes|
+|Request|Request|Yes|
++
+## Microsoft.AutonomousDevelopmentPlatform/workspaces
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit|Audit|Yes|
+|Operational|Operational|Yes|
+|Request|Request|Yes|
++ ## microsoft.avs/privateClouds |Category|Category Display Name|Costs To Export|
If you think something is missing, you can open a GitHub comment at the bottom o
|ServiceLog|Service Logs|No|
-## Microsoft.BatchAI/workspaces
-|Category|Category Display Name|Costs To Export|
-||||
-|BaiClusterEvent|BaiClusterEvent|No|
-|BaiClusterNodeEvent|BaiClusterNodeEvent|No|
-|BaiJobEvent|BaiJobEvent|No|
+## Microsoft.BatchAI/workspaces
+
+|Category|Category Display Name|Costs To Export|
+||||
+|BaiClusterEvent|BaiClusterEvent|No|
+|BaiClusterNodeEvent|BaiClusterNodeEvent|No|
+|BaiJobEvent|BaiJobEvent|No|
## Microsoft.Blockchain/blockchainMembers
If you think something is missing, you can open a GitHub comment at the bottom o
|CallDiagnostics|Call Diagnostics Logs|Yes| |CallSummary|Call Summary Logs|Yes| |ChatOperational|Operational Chat Logs|No|
+|NetworkTraversalOperational|Operational Network Traversal Logs|Yes|
|SMSOperational|Operational SMS Logs|No| |Usage|Usage Records|No|
If you think something is missing, you can open a GitHub comment at the bottom o
|||| |cloud-controller-manager|Kubernetes Cloud Controller Manager|Yes| |cluster-autoscaler|Kubernetes Cluster Autoscaler|No|
-|guard|Kubernetes Guard|No|
+|csi-azuredisk-controller|csi-azuredisk-controller|Yes|
+|csi-azurefile-controller|csi-azurefile-controller|Yes|
+|csi-snapshot-controller|csi-snapshot-controller|Yes|
+|guard|guard|No|
|kube-apiserver|Kubernetes API Server|No| |kube-audit|Kubernetes Audit|No| |kube-audit-admin|Kubernetes Audit Admin Logs|No|
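As a hedged illustration of how these category names appear once a diagnostic setting routes them to a Log Analytics workspace (assuming Azure diagnostics mode, where records land in the shared `AzureDiagnostics` table rather than resource-specific tables), a query such as the following counts recent control-plane records by category:

```kusto
// Hedged sketch: recent AKS control-plane log volume by resource log category,
// assuming the cluster sends resource logs to a workspace in Azure diagnostics mode.
AzureDiagnostics
| where TimeGenerated > ago(1h)
| where Category in ("kube-audit-admin", "kube-apiserver")
| summarize Records = count() by Category, bin(TimeGenerated, 5m)
```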
If you think something is missing, you can open a GitHub comment at the bottom o
|Operational|Operational events|No|
+## Microsoft.Dashboard/grafana
+
+|Category|Category Display Name|Costs To Export|
+||||
+|GrafanaLoginEvents|Grafana Login Events|Yes|
++ ## Microsoft.Databricks/workspaces |Category|Category Display Name|Costs To Export|
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| |||| |Audit|Audit Logs|No|
+|JobInfo|Job Info Logs|Yes|
|Requests|Request Logs|No|
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| |||| |AgentHealthStatus|AgentHealthStatus|No|
+|AgentHealthStatus|AgentHealthStatus|Yes|
+|Checkpoint|Checkpoint|Yes|
|Checkpoint|Checkpoint|No| |Connection|Connection|No|
+|Connection|Connection|Yes|
+|Error|Error|Yes|
|Error|Error|No| |HostRegistration|HostRegistration|No|
+|HostRegistration|HostRegistration|Yes|
+|Management|Management|Yes|
|Management|Management|No|
+|NetworkData|Network Data Logs|Yes|
+|SessionHostManagement|Session Host Management Activity Logs|Yes|
## Microsoft.DesktopVirtualization/scalingplans |Category|Category Display Name|Costs To Export| ||||
-|Autoscaling|Autoscaling logs|Yes|
+|Autoscale|Autoscale logs|Yes|
## Microsoft.DesktopVirtualization/workspaces
If you think something is missing, you can open a GitHub comment at the bottom o
|ResourceProviderOperation|ResourceProviderOperation|Yes|
-## Microsoft.DocumentDB/databaseAccounts
+## Microsoft.DocumentDB/cassandraClusters
+
+|Category|Category Display Name|Costs To Export|
+||||
+|CassandraAudit|CassandraAudit|Yes|
+|CassandraLogs|CassandraLogs|Yes|
++
+## Microsoft.DocumentDB/DatabaseAccounts
|Category|Category Display Name|Costs To Export| ||||
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| ||||
+|DataPlaneRequests|Data plane operations logs|No|
|DeliveryFailures|Delivery Failure Logs|No| |PublishFailures|Publish Failure Logs|No|
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| ||||
-|DeliveryFailures|Delivery Failure Logs|No|
+|DataPlaneRequests|Data plane operations logs|No|
|PublishFailures|Publish Failure Logs|No|
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| ||||
+|DataPlaneRequests|Data plane operations logs|No|
|DeliveryFailures|Delivery Failure Logs|No| |PublishFailures|Publish Failure Logs|No|
-## Microsoft.EventHub/namespaces
+## Microsoft.EventHub/Namespaces
|Category|Category Display Name|Costs To Export| ||||
If you think something is missing, you can open a GitHub comment at the bottom o
|AuditLogs|FHIR Audit logs|Yes|
-## Microsoft.Insights/AutoscaleSettings
+## microsoft.insights/autoscalesettings
|Category|Category Display Name|Costs To Export| ||||
If you think something is missing, you can open a GitHub comment at the bottom o
|AppTraces|Traces|No|
-## Microsoft.KeyVault/managedHSMs
+## microsoft.keyvault/managedhsms
|Category|Category Display Name|Costs To Export|
||||
-|AuditEvent|Audit Logs|No|
+|AuditEvent|Audit Event|No|
## Microsoft.KeyVault/vaults
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
|AuditEvent|Audit Logs|No|
+|AzurePolicyEvaluationDetails|Azure Policy Evaluation Details|Yes|
## Microsoft.Kusto/Clusters
If you think something is missing, you can open a GitHub comment at the bottom o
|IntegrationAccountTrackingEvents|Integration Account track events|No|
-## Microsoft.Logic/workflows
+## Microsoft.Logic/IntegrationAccounts
+
+|Category|Category Display Name|Costs To Export|
+||||
+|IntegrationAccountTrackingEvents|Integration Account track events|No|
++
+## Microsoft.Logic/Workflows
|Category|Category Display Name|Costs To Export|
||||
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
|AmlComputeClusterEvent|AmlComputeClusterEvent|No|
+|AmlComputeClusterEvent|AmlComputeClusterEvent|No|
+|AmlComputeClusterNodeEvent|AmlComputeClusterNodeEvent|No|
|AmlComputeCpuGpuUtilization|AmlComputeCpuGpuUtilization|No|
+|AmlComputeCpuGpuUtilization|AmlComputeCpuGpuUtilization|No|
+|AmlComputeJobEvent|AmlComputeJobEvent|No|
|AmlComputeJobEvent|AmlComputeJobEvent|No|
|AmlRunStatusChangedEvent|AmlRunStatusChangedEvent|No|
+|AmlRunStatusChangedEvent|AmlRunStatusChangedEvent|No|
+|ComputeInstanceEvent|ComputeInstanceEvent|Yes|
+|DataLabelChangeEvent|DataLabelChangeEvent|Yes|
+|DataLabelReadEvent|DataLabelReadEvent|Yes|
+|DataSetChangeEvent|DataSetChangeEvent|Yes|
+|DataSetReadEvent|DataSetReadEvent|Yes|
+|DataStoreChangeEvent|DataStoreChangeEvent|Yes|
+|DataStoreReadEvent|DataStoreReadEvent|Yes|
+|DeploymentEventACI|DeploymentEventACI|Yes|
+|DeploymentEventAKS|DeploymentEventAKS|Yes|
+|DeploymentReadEvent|DeploymentReadEvent|Yes|
+|EnvironmentChangeEvent|EnvironmentChangeEvent|Yes|
+|EnvironmentReadEvent|EnvironmentReadEvent|Yes|
+|InferencingOperationACI|InferencingOperationACI|Yes|
+|InferencingOperationAKS|InferencingOperationAKS|Yes|
+|ModelsActionEvent|ModelsActionEvent|Yes|
+|ModelsChangeEvent|ModelsChangeEvent|Yes|
+|ModelsReadEvent|ModelsReadEvent|Yes|
+|PipelineChangeEvent|PipelineChangeEvent|Yes|
+|PipelineReadEvent|PipelineReadEvent|Yes|
+|RunEvent|RunEvent|Yes|
+|RunReadEvent|RunReadEvent|Yes|
## Microsoft.Media/mediaservices
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
|KeyDeliveryRequests|Key Delivery Requests|No|
+|MediaAccount|Media Account Health Status|Yes|
## Microsoft.Media/videoanalyzers
If you think something is missing, you can open a GitHub comment at the bottom o
|Operational|Operational Logs|Yes|
-## Microsoft.Network/applicationGateways
+## Microsoft.Network/applicationgateways
|Category|Category Display Name|Costs To Export|
||||
If you think something is missing, you can open a GitHub comment at the bottom o
|ApplicationGatewayPerformanceLog|Application Gateway Performance Log|No|
-## Microsoft.Network/azurefirewalls
+## Microsoft.Network/azureFirewalls
|Category|Category Display Name|Costs To Export|
||||
+|AZFWApplicationRule|Azure Firewall Application Rule Hit|Yes|
+|AZFWApplicationRuleAggregation|Azure Firewall Network Rule Aggregation Hit|Yes|
+|AZFWDnsQuery|Azure Firewall Dns query Hit|Yes|
+|AZFWFqdnResolveFailure|Azure Firewall Fqdn Resolution Failure Hit|Yes|
+|AZFWIdpsSignature|Azure Firewall Idps Signature Hit|Yes|
+|AZFWNatRule|Azure Firewall Nat Rule Hit|Yes|
+|AZFWNatRuleAggregation|Azure Firewall Nat Rule Aggregation Hit|Yes|
+|AZFWNetworkRule|Azure Firewall Network Rule Hit|Yes|
+|AZFWNetworkRuleAggregation|Azure Firewall Application Rule Aggregation Hit|Yes|
+|AZFWThreatIntel|Azure Firewall ThreatIntel Hit|Yes|
|AzureFirewallApplicationRule|Azure Firewall Application Rule|No|
|AzureFirewallDnsProxy|Azure Firewall DNS Proxy|No|
|AzureFirewallNetworkRule|Azure Firewall Network Rule|No|
-## Microsoft.Network/bastionHosts
+## microsoft.network/bastionHosts
|Category|Category Display Name|Costs To Export|
||||
If you think something is missing, you can open a GitHub comment at the bottom o
|NetworkSecurityGroupRuleCounter|Network Security Group Rule Counter|No|
-## Microsoft.Network/p2sVpnGateways
+## Microsoft.Network/networkSecurityPerimeters
+
+|Category|Category Display Name|Costs To Export|
+||||
+|NSPInboundAccessAllowed|NSP Inbound Access Allowed.|Yes|
+|NSPInboundAccessDenied|NSP Inbound Access Denied.|Yes|
+|NSPOutboundAccessAllowed|NSP Outbound Access Allowed.|Yes|
+|NSPOutboundAccessDenied|NSP Outbound Access Denied.|Yes|
+|NSPOutboundAttempt|NSP Outbound Attempted.|Yes|
+|PrivateEndPointTraffic|Private Endpoint Traffic|Yes|
+|ResourceInboundAccessAllowed|Resource Inbound Access Allowed.|Yes|
+|ResourceInboundAccessDenied|Resource Inbound Access Denied|Yes|
+|ResourceOutboundAccessAllowed|Resource Outbound Access Allowed|Yes|
+|ResourceOutboundAccessDenied|Resource Outbound Access Denied|Yes|
++
+## microsoft.network/p2svpngateways
|Category|Category Display Name|Costs To Export|
||||
If you think something is missing, you can open a GitHub comment at the bottom o
|ProbeHealthStatusEvents|Traffic Manager Probe Health Results Event|No|
-## Microsoft.Network/virtualNetworkGateways
+## microsoft.network/virtualnetworkgateways
|Category|Category Display Name|Costs To Export|
||||
If you think something is missing, you can open a GitHub comment at the bottom o
|VMProtectionAlerts|VM protection alerts|No|
-## Microsoft.Network/vpnGateways
+## microsoft.network/vpngateways
|Category|Category Display Name|Costs To Export|
||||
If you think something is missing, you can open a GitHub comment at the bottom o
|OperationalLogs|Operational Logs|No|
+## Microsoft.OpenLogisticsPlatform/Workspaces
+
+|Category|Category Display Name|Costs To Export|
+||||
+|SupplyChainEntityOperations|Supply Chain Entity Operations|Yes|
+|SupplyChainEventLogs|Supply Chain Event logs|Yes|
++
## Microsoft.OperationalInsights/workspaces
|Category|Category Display Name|Costs To Export|
||||
-|Audit|Audit Logs|No|
+|Audit|Audit|No|
## Microsoft.PowerBI/tenants
If you think something is missing, you can open a GitHub comment at the bottom o
|Engine|Engine|No|
-## Microsoft.Purview/accounts
+## microsoft.purview/accounts
|Category|Category Display Name|Costs To Export|
||||
-|ScanStatusLogEvent|ScanStatus|Yes|
+|DataSensitivityLogEvent|DataSensitivity|Yes|
+|ScanStatusLogEvent|ScanStatus|No|
+|Security|PurviewAccountAuditEvents|Yes|
## Microsoft.RecoveryServices/Vaults
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
|HybridConnectionsEvent|HybridConnections Events|No|
+|HybridConnectionsLogs|HybridConnectionsLogs|No|
## Microsoft.Search/searchServices
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
-|Analytics|Analytics|Yes|
-|DataConnectors|Data Collection ΓÇô Connectors|Yes|
+|DataConnectors|Data Collection - Connectors|Yes|
-## Microsoft.ServiceBus/namespaces
+## Microsoft.ServiceBus/Namespaces
|Category|Category Display Name|Costs To Export|
||||
+|ApplicationMetricsLogs|Application Metrics Logs|Yes|
|OperationalLogs|Operational Logs|No|
+|RuntimeAuditLogs|Runtime Audit Logs|Yes|
|VNetAndIPFilteringLogs|VNet/IP Filtering Connection Logs|No|
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
-|AllLogs|Azure Web PubSub Service Logs.|Yes|
+|ConnectivityLogs|Connectivity logs for Azure Web PubSub Service.|Yes|
+|HttpRequestLogs|Http Request logs for Azure Web PubSub Service.|Yes|
+|MessagingLogs|Messaging logs for Azure Web PubSub Service.|Yes|
## microsoft.singularity/accounts
If you think something is missing, you can open a GitHub comment at the bottom o
|StorageWrite|StorageWrite|Yes|
+## Microsoft.StorageCache/caches
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AscCacheOperationEvent|HPC Cache operation event|Yes|
+|AscUpgradeEvent|HPC Cache upgrade event|Yes|
+|AscWarningEvent|HPC Cache warning|Yes|
++
## Microsoft.StreamAnalytics/streamingjobs
|Category|Category Display Name|Costs To Export|
If you think something is missing, you can open a GitHub comment at the bottom o
|Management|Management|No|
-## microsoft.web/hostingenvironments
+## Microsoft.Web/hostingEnvironments
|Category|Category Display Name|Costs To Export|
||||
|AppServiceEnvironmentPlatformLogs|App Service Environment Platform Logs|No|
-## microsoft.web/sites
+## Microsoft.Web/sites
|Category|Category Display Name|Costs To Export|
||||
azure-monitor Analyze Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md
+
+ Title: Analyze usage in Log Analytics workspace in Azure Monitor
+description: Methods and queries to analyze the data in your Log Analytics workspace to help you understand usage and potential cause for high usage.
++ Last updated : 03/24/2022+
+
+# Analyze usage in Log Analytics workspace
+Azure Monitor costs can vary significantly based on the volume of data being collected in your Log Analytics workspace. This volume is affected by the set of solutions using the workspace and the amount of data collected by each. This article provides guidance on analyzing your collected data to help you control your data ingestion costs. It helps you determine the cause of higher-than-expected usage and predict your costs as you monitor additional resources and configure different Azure Monitor features.
+
+## Causes for higher than expected usage
+Each Log Analytics workspace is charged as a separate service and contributes to the bill for your Azure subscription. The amount of data ingestion can be considerable, depending on the following factors:
+
+ - Set of insights and services enabled and their configuration
+ - Number and type of monitored resources
+ - Volume of data collected from each monitored resource
+
+An unexpected increase in any of these factors can result in increased charges for data retention. The rest of this article provides methods for detecting such a situation and then analyzing collected data to identify and mitigate the source of the increased usage.
+
+## Usage analysis in Azure Monitor
+Start your analysis with the existing tools in Azure Monitor. They require no configuration and can often provide the information you need with minimal effort. If you need deeper analysis of your collected data than these features provide, use any of the [log queries](log-query-overview.md) in [Log Analytics](log-analytics-overview.md) described later in this article.
+### Log Analytics Workspace Insights
+[Log Analytics Workspace Insights](log-analytics-workspace-insights-overview.md#usage-tab) provides you with a quick understanding of the data in your workspace including the following:
+
+- Data tables ingesting the most data volume in the main table
+- Top resources contributing data
+- Trend of data ingestion
+
+See the **Usage** tab for a breakdown of ingestion by solution and table. This can help you quickly identify the tables that contribute the bulk of your data volume. It also shows the trend of data collection over time, so you can determine whether data collection increased steadily or spiked in response to a particular configuration change.
+
+Select **Additional Queries** for pre-built queries that help you further understand your data patterns.
+
+### Usage and Estimated Costs
+The *Data ingestion per solution* chart on the [Usage and Estimated Costs](../usage-estimated-costs.md#usage-and-estimated-costs) page for each workspace shows the total volume of data sent and how much is being sent by each solution over the previous 31 days. This helps you determine trends such as whether any increase is from overall data usage or usage by a particular solution.
++
+## Log queries
+You can use [log queries](log-query-overview.md) in [Log Analytics](log-analytics-overview.md) if you need deeper analysis into your collected data. Each table in a Log Analytics workspace has the following standard columns that can assist you in analyzing billable data.
+
+- [_IsBillable](log-standard-columns.md#_isbillable) identifies records for which there is an ingestion charge. Use this column to filter out non-billable data.
+- [_BilledSize](log-standard-columns.md#_billedsize) provides the size in bytes of the record.
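+
+For example, a minimal sketch (assuming your workspace collects resource logs into the `AzureDiagnostics` table) that uses both columns to report the billable volume of a single table over the last day:
+
+```kusto
+AzureDiagnostics
+| where TimeGenerated > ago(1d)
+| where _IsBillable == true
+| summarize BillableDataGB = sum(_BilledSize) / 1.E9
+```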
+
+## Data volume by solution
+Analyze the amount of billable data collected by a particular service or solution. These queries use the [Usage](/azure/azure-monitor/reference/tables/usage) table that collects usage data for each table in the workspace.
++
+> [!NOTE]
+> The clause with `TimeGenerated` is only to ensure that the query experience in the Azure portal looks back beyond the default 24 hours. When using the **Usage** data type, `StartTime` and `EndTime` represent the time buckets for which results are presented.
+
+**Billable data volume by solution over the past month**
+
+```kusto
+Usage
+| where TimeGenerated > ago(32d)
+| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
+| where IsBillable == true
+| summarize BillableDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d), Solution
+| render columnchart
+```
+
+**Billable data volume by type over the past month**
+
+```kusto
+Usage
+| where TimeGenerated > ago(32d)
+| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
+| where IsBillable == true
+| summarize BillableDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d), DataType
+| render columnchart
+```
+
+**Billable data volume by solution and type over the past month**
+
+```kusto
+Usage
+| where TimeGenerated > ago(32d)
+| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
+| where IsBillable == true
+| summarize BillableDataGB = sum(Quantity) / 1000 by Solution, DataType
+| sort by Solution asc, DataType asc
+```
+
+**Billable data volume for specific events**
+If you find that a particular data type is collecting excessive data, you may want to analyze the data in that table to determine particular records that are increasing. This example filters particular event IDs in the `Event` table and then provides a count for each ID. You can modify this query to use columns from other tables.
+
+```kusto
+Event
+| where TimeGenerated > startofday(ago(31d)) and TimeGenerated < startofday(now())
+| where EventID == 5145 or EventID == 5156
+| where _IsBillable == true
+| summarize count(), Bytes=sum(_BilledSize) by EventID, bin(TimeGenerated, 1d)
+```
+
+## Data volume by computer
+Analyze the amount of billable data collected from a virtual machine or set of virtual machines. The **Usage** table doesn't include information about data collected from virtual machines, so these queries use the [find operator](/azure/data-explorer/kusto/query/findoperator) to search all tables that include a computer name. The **Usage** data type is omitted because it's only used for analytics of data trends.
+
+> [!WARNING]
+> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the queries above.
+
+**Billable data volume by computer**
+
+```kusto
+find where TimeGenerated > ago(24h) project _BilledSize, _IsBillable, Computer, Type
+| where _IsBillable == true and Type != "Usage"
+| extend computerName = tolower(tostring(split(Computer, '.')[0]))
+| summarize BillableDataBytes = sum(_BilledSize) by computerName
+| sort by BillableDataBytes desc nulls last
+```
+
+**Count of billable events by computer**
+
+```kusto
+find where TimeGenerated > ago(24h) project _IsBillable, Computer
+| where _IsBillable == true and Type != "Usage"
+| extend computerName = tolower(tostring(split(Computer, '.')[0]))
+| summarize eventCount = count() by computerName
+| sort by eventCount desc nulls last
+```
+
+## Data volume by Azure resource, resource group, or subscription
+Analyze the amount of billable data collected from a particular resource or set of resources. These queries use the [_ResourceId](./log-standard-columns.md#_resourceid) and [_SubscriptionId](./log-standard-columns.md#_subscriptionid) columns for data from resources hosted in Azure.
+
+> [!WARNING]
+> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the queries above.
+
+**Billable data volume by resource ID**
+
+```kusto
+find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillable
+| where _IsBillable == true
+| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId
+| sort by BillableDataBytes nulls last
+```
+
+**Billable data volume by resource group**
+
+```kusto
+find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillable
+| where _IsBillable == true
+| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId
+| extend resourceGroup = tostring(split(_ResourceId, "/")[4] )
+| summarize BillableDataBytes = sum(BillableDataBytes) by resourceGroup
+| sort by BillableDataBytes nulls last
+```
+
+It may be helpful to parse the **_ResourceId** column; a sketch applying this parse follows the snippet below:
+
+```Kusto
+| parse tolower(_ResourceId) with "/subscriptions/" subscriptionId "/resourcegroups/"
+ resourceGroup "/providers/" provider "/" resourceType "/" resourceName
+```
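+
+For example, a hedged sketch (assuming the common `/subscriptions/.../resourcegroups/.../providers/...` resource ID layout) that applies this parse to summarize billable volume by resource provider and resource type:
+
+```kusto
+find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillable
+| where _IsBillable == true
+| parse tolower(_ResourceId) with "/subscriptions/" subscriptionId "/resourcegroups/" resourceGroup "/providers/" provider "/" resourceType "/" resourceName
+| summarize BillableDataBytes = sum(_BilledSize) by provider, resourceType
+| sort by BillableDataBytes desc nulls last
+```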
+
+**Billable data volume by subscription**
+
+```kusto
+find where TimeGenerated > ago(24h) project _BilledSize, _IsBillable, _SubscriptionId
+| where _IsBillable == true
+| summarize BillableDataBytes = sum(_BilledSize) by _SubscriptionId
+| sort by BillableDataBytes nulls last
+```
+## Querying for common data types
+If you find that you have excessive billable data for a particular data type, then you may need to perform a query to analyze data in that table. The following queries provide samples for some common data types:
+
+**Security** solution
+
+```kusto
+SecurityEvent
+| summarize AggregatedValue = count() by EventID
+```
+
+**Log Management** solution
+
+```kusto
+Usage
+| where Solution == "LogManagement" and iff(isnotnull(toint(IsBillable)), IsBillable == true, IsBillable == "true") == true
+| summarize AggregatedValue = count() by DataType
+```
+
+**Perf** data type
+
+```kusto
+Perf
+| summarize AggregatedValue = count() by CounterPath
+```
+
+```kusto
+Perf
+| summarize AggregatedValue = count() by CounterName
+```
+
+**Event** data type
+
+```kusto
+Event
+| summarize AggregatedValue = count() by EventID
+```
+
+```kusto
+Event
+| summarize AggregatedValue = count() by EventLog, EventLevelName
+```
+
+**Syslog** data type
+
+```kusto
+Syslog
+| summarize AggregatedValue = count() by Facility, SeverityLevel
+```
+
+```kusto
+Syslog
+| summarize AggregatedValue = count() by ProcessName
+```
+
+**AzureDiagnostics** data type
+
+```kusto
+AzureDiagnostics
+| summarize AggregatedValue = count() by ResourceProvider, ResourceId
+```
+
+## Application insights data
+There are two approaches to investigating the amount of data collected for Application Insights, depending on whether you have a classic or workspace-based application. Use the `_BilledSize` property that is available on each ingested event for both workspace-based and classic resources. You can also use aggregated information in the [systemEvents](/azure/azure-monitor/reference/tables/appsystemevents) table for classic resources.
++
+> [!NOTE]
+> The queries in this section will work for both a workspace-based and classic Application Insights resource since [backwards compatibility](../app/convert-classic-resource.md#understanding-log-queries) allows you to continue to use [legacy table names](../app/apm-tables.md). For a workspace-based resource, open **Logs** from the **Log Analytics workspace** menu. For a classic resource, open **Logs** from the **Application Insights** menu.
++
+**Operations generating the most data volume in the last 30 days (workspace-based or classic)**
+
+```kusto
+dependencies
+| where timestamp >= startofday(ago(30d))
+| summarize sum(_BilledSize) by operation_Name
+| render barchart
+```
++
+**Data volume ingested in the last 24 hours (classic)**
+
+```kusto
+systemEvents
+| where timestamp >= ago(24h)
+| where type == "Billing"
+| extend BillingTelemetryType = tostring(dimensions["BillingTelemetryType"])
+| extend BillingTelemetrySizeInBytes = todouble(measurements["BillingTelemetrySize"])
+| summarize sum(BillingTelemetrySizeInBytes)
+```
+
+**Data volume by type ingested in the last 30 days (classic)**
+
+```kusto
+systemEvents
+| where timestamp >= startofday(ago(30d))
+| where type == "Billing"
+| extend BillingTelemetryType = tostring(dimensions["BillingTelemetryType"])
+| extend BillingTelemetrySizeInBytes = todouble(measurements["BillingTelemetrySize"])
+| summarize sum(BillingTelemetrySizeInBytes) by BillingTelemetryType, bin(timestamp, 1d)
+| render barchart
+```
+
+**Count of event types ingested in the last 30 days (classic)**
+
+```kusto
+systemEvents
+| where timestamp >= startofday(ago(30d))
+| where type == "Billing"
+| extend BillingTelemetryType = tostring(dimensions["BillingTelemetryType"])
+| summarize count() by BillingTelemetryType, bin(timestamp, 1d)
+| render barchart
+```
++
+### Data volume trends for workspace-based resources
+To look at the data volume trends for [workspace-based Application Insights resources](../app/create-workspace-resource.md), use a query that includes all of the Application Insights tables. The following queries use the [table names specific to workspace-based resources](../app/apm-tables.md#table-schemas).
++
+**Data volume trends for all Application Insights resources in a workspace for the last week**
+
+```kusto
+union (AppAvailabilityResults),
+ (AppBrowserTimings),
+ (AppDependencies),
+ (AppExceptions),
+ (AppEvents),
+ (AppMetrics),
+ (AppPageViews),
+ (AppPerformanceCounters),
+ (AppRequests),
+ (AppSystemEvents),
+ (AppTraces)
+| where TimeGenerated >= startofday(ago(7d)) and TimeGenerated < startofday(now())
+| summarize sum(_BilledSize) by _ResourceId, bin(TimeGenerated, 1d)
+| render areachart
+```
+
+**Data volume trends for a specific Application Insights resource in a workspace for the last week**
+
+```kusto
+union (AppAvailabilityResults),
+ (AppBrowserTimings),
+ (AppDependencies),
+ (AppExceptions),
+ (AppEvents),
+ (AppMetrics),
+ (AppPageViews),
+ (AppPerformanceCounters),
+ (AppRequests),
+ (AppSystemEvents),
+ (AppTraces)
+| where TimeGenerated >= startofday(ago(7d)) and TimeGenerated < startofday(now())
+| where _ResourceId contains "<myAppInsightsResourceName>"
+| summarize sum(_BilledSize) by Type, bin(TimeGenerated, 1d)
+| render areachart
+```
+++
+## Understanding nodes sending data
+If you don't have excessive data from any particular source, you may have an excessive number of agents that are sending data.
+
+> [!WARNING]
+> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the queries above.
++
+**Count of agent nodes that are sending a heartbeat each day in the last month**
+
+```kusto
+Heartbeat
+| where TimeGenerated > startofday(ago(31d))
+| summarize nodes = dcount(Computer) by bin(TimeGenerated, 1d)
+| render timechart
+```
+
+**Count of nodes sending any data in the last 24 hours**
+
+```kusto
+find where TimeGenerated > ago(24h) project Computer
+| extend computerName = tolower(tostring(split(Computer, '.')[0]))
+| where computerName != ""
+| summarize nodes = dcount(computerName)
+```
+
+**Data volume sent by each node in the last 24 hours**
+
+```kusto
+find where TimeGenerated > ago(24h) project _BilledSize, Computer
+| extend computerName = tolower(tostring(split(Computer, '.')[0]))
+| where computerName != ""
+| summarize TotalVolumeBytes=sum(_BilledSize) by computerName
+```
+
+## Nodes billed by the legacy Per Node pricing tier
+The [legacy Per Node pricing tier](cost-logs.md#legacy-pricing-tiers) bills for nodes with hourly granularity and also doesn't count nodes that are only sending a set of security data types. To get a list of computers that will be billed as nodes if the workspace is in the legacy Per Node pricing tier, look for nodes that are sending billed data types since some data types are free. In this case, use the leftmost field of the fully qualified domain name.
+
+The following query returns the count of computers with billed data per hour. The number of units on your bill is in units of node months, which is represented by `billableNodeMonthsPerDay` in the query. If the workspace has the Update Management solution installed, add the **Update** and **UpdateSummary** data types to the list in the `where` clause.
+
+```kusto
+find where TimeGenerated >= startofday(ago(7d)) and TimeGenerated < startofday(now()) project Computer, _IsBillable, Type, TimeGenerated
+| where Type !in ("SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent")
+| extend computerName = tolower(tostring(split(Computer, '.')[0]))
+| where computerName != ""
+| where _IsBillable == true
+| summarize billableNodesPerHour=dcount(computerName) by bin(TimeGenerated, 1h)
+| summarize billableNodesPerDay = sum(billableNodesPerHour)/24., billableNodeMonthsPerDay = sum(billableNodesPerHour)/24./31. by day=bin(TimeGenerated, 1d)
+| sort by day asc
+```
+> [!NOTE]
+> There's some additional complexity in the actual billing algorithm when solution targeting is used that's not represented in the above query.
+
+## Security and Automation node counts
+
+**Count of distinct security nodes**
+
+```kusto
+union
+(
+ Heartbeat
+ | where (Solutions has 'security' or Solutions has 'antimalware' or Solutions has 'securitycenter')
+ | project Computer
+),
+(
+ ProtectionStatus
+ | where Computer !in (Heartbeat | project Computer)
+ | project Computer
+)
+| distinct Computer
+| project lowComputer = tolower(Computer)
+| distinct lowComputer
+| count
+```
+
+**Number of distinct Automation nodes**
+
+```kusto
+ ConfigurationData
+ | where (ConfigDataType == "WindowsServices" or ConfigDataType == "Software" or ConfigDataType =="Daemons")
+ | extend lowComputer = tolower(Computer) | summarize by lowComputer
+ | join (
+ Heartbeat
+ | where SCAgentChannel == "Direct"
+ | extend lowComputer = tolower(Computer) | summarize by lowComputer, ComputerEnvironment
+ ) on lowComputer
+ | summarize count() by ComputerEnvironment | sort by ComputerEnvironment asc
+```
+
+## Late-arriving data
+If you observe high data ingestion reported using `Usage` records, but you don't observe the same results summing `_BilledSize` directly on the data type, it's possible that you have late-arriving data. This is when data is ingested with old timestamps.
+
+For example, an agent may have a connectivity issue and send accumulated data once it reconnects. Or a host may have an incorrect time. This can result in an apparent discrepancy between the ingested data reported by the [Usage](/azure/azure-monitor/reference/tables/usage) data type and a query summing [_BilledSize](./log-standard-columns.md#_billedsize) over the raw data for a particular day specified by **TimeGenerated**, the timestamp when the event was generated.
+
+To diagnose late-arriving data issues, use the [_TimeReceived](./log-standard-columns.md#_timereceived) column in addition to the [TimeGenerated](./log-standard-columns.md#timegenerated) column. `_TimeReceived` is the time when the record was received by the Azure Monitor ingestion point in the Azure cloud.
+
+The following example responds to high ingested volumes of [W3CIISLog](/azure/azure-monitor/reference/tables/w3ciislog) data on May 2, 2021 by identifying the timestamps on that ingested data. The `where TimeGenerated > datetime(1970-01-01)` statement is included only as a hint to the Log Analytics user interface to query over all data.
+
+```Kusto
+W3CIISLog
+| where TimeGenerated > datetime(1970-01-01)
+| where _TimeReceived >= datetime(2021-05-02) and _TimeReceived < datetime(2021-05-03)
+| where _IsBillable == true
+| summarize BillableDataMB = sum(_BilledSize)/1.E6 by bin(TimeGenerated, 1d)
+| sort by TimeGenerated asc
+```
+
+## Next steps
+
+- See [Azure Monitor Logs pricing details](cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
+- See [Azure Monitor cost and usage](../usage-estimated-costs.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill.
+- See [Azure Monitor best practices - Cost management](../best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges.
+- See [Ingestion-time transformations in Azure Monitor Logs (preview)](ingestion-time-transformations.md) for details on using ingestion-time transformations to reduce the amount of data you collected in a Log Analytics workspace by filtering unwanted records and columns.
azure-monitor Change Pricing Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/change-pricing-tier.md
+
+ Title: Change pricing tier for Log Analytics workspace
+description: Details on how to change pricing tier for Log Analytics workspace in Azure Monitor.
+ Last updated : 03/25/2022+
+
+# Change pricing tier for Log Analytics workspace
+Each Log Analytics workspace in Azure Monitor can have a different [pricing tier](cost-logs.md#commitment-tiers). This article describes how to change the pricing tier for a workspace and how to track these changes.
+
+> [!NOTE]
+> This article describes how to change the commitment tier for a Log Analytics workspace once you determine which commitment tier you want to use. See [Azure Monitor Logs pricing details](cost-logs.md) for details on how commitment tiers work and [Azure Monitor cost and usage](../usage-estimated-costs.md#log-analytics-workspace) for recommendations on the most cost effective commitment based on your observed Azure Monitor usage.
+## Azure portal
+Use the following steps to change the pricing tier of your workspace using the Azure portal.
+
+1. From the **Log Analytics workspaces** menu, select your workspace, and open **Usage and estimated costs**. This displays a list of each of the pricing tiers available for this workspace.
+
+2. Review the estimated costs for each pricing tier. This estimate assumes that the last 31 days of your usage is typical. In the example below, based on the data patterns from the previous 31 days, this workspace would cost less in the Pay-As-You-Go tier (#1) compared to the 100 GB/day commitment tier (#2).
+
+
+3. Click **Select** if you decide to change the pricing tier after reviewing the estimated costs.
+
+## Azure Resource Manager
+To set the pricing tier using an [Azure Resource Manager template](./resource-manager-workspace.md), use the `sku` object to set the pricing tier, and include the `capacityReservationLevel` parameter if the pricing tier is `capacityreservation`. For details on this template format, see [Microsoft.OperationalInsights workspaces](/azure/templates/microsoft.operationalinsights/workspaces).
+
+The following sample template sets a workspace to a 300 GB/day commitment tier. To set the pricing tier to other values such as Pay-As-You-Go (called `pergb2018` for the SKU), omit the `capacityReservationLevel` property.
+
+```json
+{
+  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "name": "YourWorkspaceName",
+ "type": "Microsoft.OperationalInsights/workspaces",
+ "apiVersion": "2020-08-01",
+ "location": "yourWorkspaceRegion",
+ "properties": {
+ "sku": {
+ "name": "capacityreservation",
+ "capacityReservationLevel": 300
+ }
+ }
+ }
+ ]
+}
+```
+
+See [Deploying the sample templates](../resource-manager-samples.md) if you're not familiar with using Resource Manager templates.
+++
+## Tracking pricing tier changes
+Changes to a workspace's pricing tier are recorded in the [Activity Log](../essentials/activity-log.md). Filter for events with an **Operation** of *Create Workspace*. The event's **Change history** tab will show the old and new pricing tiers in the `properties.sku.name` row. To monitor changes to the pricing tier, [create an alert](../alerts/alerts-activity-log.md) for the *Create Workspace* operation. If you send the Activity Log to a Log Analytics workspace, you can also query for these operations, as in the sketch below.
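+
+The following is a minimal sketch rather than a definitive audit query. It assumes you've sent the [Activity Log](../essentials/activity-log.md) to a Log Analytics workspace so that the `AzureActivity` table is populated; adjust the operation name filter if your records use a different value.
+
+```kusto
+AzureActivity
+| where OperationNameValue =~ "Microsoft.OperationalInsights/workspaces/write"
+| project TimeGenerated, ResourceGroup, _ResourceId, Caller, ActivityStatusValue
+| sort by TimeGenerated desc
+```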
+
+## Next steps
+
+- See [Azure Monitor Logs pricing details](cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
+- See [Azure Monitor cost and usage](../usage-estimated-costs.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill.
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
+
+ Title: Azure Monitor Logs pricing details
+description: Cost details for data stored in a Log Analytics workspace in Azure Monitor, including commitment tiers and data size calculation.
++ Last updated : 03/24/2022+
+
+# Azure Monitor Logs pricing details
+The most significant charges for most Azure Monitor implementations will typically be ingestion and retention of data in your Log Analytics workspaces. Several features in Azure Monitor do not have a direct cost but add to the workspace data that's collected. This article describes how data charges are calculated for your Log Analytics workspaces and Application Insights resources and the different configuration options that affect your costs.
+
+## Pricing model
+The default pricing for Log Analytics is a Pay-As-You-Go model that's based on ingested data volume and data retention. Each Log Analytics workspace is charged as a separate service and contributes to the bill for your Azure subscription. The amount of data ingestion can be considerable, depending on the following factors:
+
+- The set of management solutions enabled and their configuration
+- The number and type of monitored resources
+- Type of data collected from each monitored resource
+
+## Data size calculation
+Data volume is measured as the size of the data that will be stored in GB (10^9 bytes). The data size of a single record is calculated from a string representation of the columns that are stored in the Log Analytics workspace for that record, regardless of whether the data is sent from an agent or added during the ingestion process. This includes any custom columns added by the [custom logs API](custom-logs-overview.md), [ingestion-time transformations](ingestion-time-transformations.md), or [custom fields](custom-fields.md) that are added as data is collected and then stored in the workspace.
+
+### Excluded columns
+The following [standard columns](log-standard-columns.md) that are common to all tables, are excluded in the calculation of the record size. All other columns stored in Log Analytics are included in the calculation of the record size.
+
+- `_ResourceId`
+- `_SubscriptionId`
+- `_ItemId`
+- `_IsBillable`
+- `_BilledSize`
+- `Type`
++
+### Excluded tables
+Some tables are free from data ingestion charges altogether, including [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity), [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat), [Usage](/azure/azure-monitor/reference/tables/usage), and [Operation](/azure/azure-monitor/reference/tables/operation). Whether a record was excluded from data ingestion billing is always indicated by the [_IsBillable](log-standard-columns.md#_isbillable) column. A quick check is sketched below.
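+
+As a minimal sketch, the following query groups recent records in one of these tables by the `_IsBillable` flag; it assumes the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table is populated in your workspace and should report those records as non-billable:
+
+```kusto
+Heartbeat
+| where TimeGenerated > ago(1h)
+| summarize Records = count() by _IsBillable
+```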
+++
+### Charges for other solutions and services
+Some solutions have more specific policies about free data ingestion. For example [Azure Migrate](https://azure.microsoft.com/pricing/details/azure-migrate/) makes dependency visualization data free for the first 180-days of a Server Assessment. Services such as [Microsoft Defender for Cloud](https://azure.microsoft.com/pricing/details/azure-defender/), [Microsoft Sentinel](https://azure.microsoft.com/pricing/details/azure-sentinel/), and [Configuration management](https://azure.microsoft.com/pricing/details/automation/) have their own pricing models.
+
+See the documentation for different services and solutions for any unique billing calculations.
+
+## Commitment Tiers
+In addition to the Pay-As-You-Go model, Log Analytics has **Commitment Tiers**, which can save you as much as 30 percent compared to the Pay-As-You-Go price. With commitment tier pricing, you can commit to buy data ingestion starting at 100 GB/day at a lower price than Pay-As-You-Go pricing. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected.
+
+- During the commitment period, you can change to a higher commitment tier (which restarts the 31-day commitment period), but you can't move back to Pay-As-You-Go or to a lower commitment tier until after you finish the commitment period.
+- At the end of the commitment period, the workspace retains the selected commitment tier, and the workspace can be moved to Pay-As-You-Go or to a different commitment tier at any time.
+
+Billing for the commitment tiers is done on a daily basis. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for a detailed listing of the commitment tiers and their prices.
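+
+To illustrate the overage behavior, here's a hedged sketch written as a standalone KQL `print` statement. The $196/day price for the 100 GB/day tier is illustrative only (it matches the sample price used in the evaluation query later in this article); check the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) page for current rates.
+
+```kusto
+// Illustrative only: a day with 120 GB ingested while on the 100 GB/day commitment tier.
+print ingestedGB = 120., tier100DailyPrice = 196.
+| extend effectivePricePerGB = tier100DailyPrice / 100.
+| extend dailyCost = tier100DailyPrice + max_of(ingestedGB - 100., 0.) * effectivePricePerGB
+```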
+
+> [!TIP]
+> The **Usage and estimated costs** menu item for each Log Analytics workspace shows an estimate of your monthly charges at each commitment level. You should periodically review this information to determine if you can reduce your charges by moving to another tier. See [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs) for information on this view.
++
+> [!NOTE]
+> Starting June 2, 2021, **Capacity Reservations** were renamed to **Commitment Tiers**. Data collected above your commitment tier level (overage) is now billed at the same price-per-GB as the current commitment tier level, lowering costs compared to the old method of billing at the Pay-As-You-Go rate, and reducing the need for users with large data volumes to fine-tune their commitment level. Three new commitment tiers were also added: 1000, 2000, and 5000 GB/day.
+
+## Dedicated clusters
+An [Azure Monitor Logs dedicated cluster](logs-dedicated-clusters.md) is a collection of workspaces in a single managed Azure Data Explorer cluster. Dedicated clusters support advanced features such as [customer-managed keys](customer-managed-keys.md) and use the same commitment tier pricing model as workspaces although they must have a commitment level of at least 500 GB/day. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. There is no Pay-As-You-Go option for clusters.
+
+The cluster commitment tier has a 31-day commitment period after the commitment level is increased. During the commitment period, the commitment tier level can't be reduced, but it can be increased at any time. When workspaces are associated to a cluster, the data ingestion billing for those workspaces is done at the cluster level using the configured commitment tier level.
+
+There are two modes of billing for a cluster that you specify when you create the cluster.
+
+- **Cluster (default)**: Billing for ingested data is done at the cluster level. The ingested data quantities from each workspace associated to a cluster are aggregated to calculate the daily bill for the cluster. Per-node allocations from [Microsoft Defender for Cloud](../../security-center/index.yml) are applied at the workspace level prior to this aggregation of data across all workspaces in the cluster.
+
+- **Workspaces**: Commitment tier costs for your cluster are attributed proportionately to the workspaces in the cluster, based on each workspace's data ingestion volume (after accounting for per-node allocations from [Microsoft Defender for Cloud](../../security-center/index.yml) for each workspace).<br><br>If the total data volume ingested into a cluster for a day is less than the commitment tier, each workspace is billed for its ingested data at the effective per-GB commitment tier rate by billing it a fraction of the commitment tier, and the unused part of the commitment tier is billed to the cluster resource.<br><br>If the total data volume ingested into a cluster for a day is more than the commitment tier, each workspace is billed for its fraction of the commitment tier based on its share of that day's ingested data, plus its share of the ingested data above the commitment tier. In that case, nothing is billed to the cluster resource.
+
+In cluster billing options, data retention is billed for each workspace. Cluster billing starts when the cluster is created, regardless of whether workspaces are associated with the cluster.
+
+When you link workspaces to a cluster, the pricing tier is changed to cluster, and ingestion is billed based on the cluster's commitment tier. Workspaces associated to a cluster no longer have their own pricing tier. Workspaces can be unlinked from a cluster at any time, and their pricing tier then changes to per-GB.
+
+If your linked workspace is using legacy Per Node pricing tier, it will be billed based on data ingested against the cluster's Commitment Tier, and no longer Per Node. Per-node data allocations from Microsoft Defender for Cloud will continue to be applied.
+
+See [Create a dedicated cluster](logs-dedicated-clusters.md#create-a-dedicated-cluster) for details on creating a dedicated cluster and specifying its billing type.
+
+## Basic Logs
+You can configure certain tables in a Log Analytics workspace to use [Basic Logs](basic-logs-configure.md). Data in these tables has a significantly reduced ingestion charge and a limited retention period. There is a charge though to query against these tables. Basic Logs are intended for high-volume verbose logs you use for debugging, troubleshooting and auditing, but not for analytics and alerts.
+
+See [Configure Basic Logs in Azure Monitor](basic-logs-configure.md) for details on Basic Logs including how to configure them and query their data.
+## Data retention and Archive Logs
+In addition to data ingestion, there is a charge for the retention of data in each Log Analytics workspace. You can set the retention period for the entire workspace or for each table. After this period, the data is either removed or archived. Archived Logs have a reduced retention charge, but there is a charge to restore or search against them. Use Archive Logs to reduce your costs for data that you must store for compliance or occasional investigation.
+
+See [Configure data retention and archive policies in Azure Monitor Logs](data-retention-archive.md) for details on data retention and archiving including how to configure these settings and access archived data.
+## Application insights billing
+Since [workspace-based Application Insights resources](../app/create-workspace-resource.md) store their data in a Log Analytics workspace, the billing for data ingestion and retention is done by the workspace where the Application Insights data is located. This enables you to leverage all options of the Log Analytics pricing model, including [commitment tiers](#commitment-tiers) in addition to Pay-As-You-Go.
+
+Data ingestion and data retention for a [classic Application Insights resource](../app/create-new-resource.md) follow the same Pay-As-You-Go pricing as workspace-based resources, but they can't leverage commitment tiers.
+
+Telemetry from ping tests and multi-step tests is charged the same as data usage for other telemetry from your app. Use of web tests and enabling alerting on custom metric dimensions is still reported through Application Insights. There's no data volume charge for using the [Live Metrics Stream](../app/live-stream.md).
+
+See [Application Insights legacy enterprise (per node) pricing tier](../app/legacy-pricing.md) for details about legacy tiers that are available to early adopters of Application Insights.
+
+## Workspaces with Microsoft Sentinel
+When Microsoft Sentinel is enabled in a Log Analytics workspace, all data collected in that workspace is subject to Sentinel charges in addition to Log Analytics charges. For this reason, you will often separate your security and operational data in different workspaces so that you don't incur [Sentinel charges](../../sentinel/billing.md) for operational data. There may be particular situations though where combining this data can actually result in a cost savings. This is typically when you aren't collecting enough security and operational data to each reach a commitment tier on their own, but the combined data is enough to reach a commitment tier. See **Combining your SOC and non-SOC data** in [Design your Microsoft Sentinel workspace architecture](../../sentinel/design-your-workspace-architecture.md#decision-tree) for details and a sample cost calculation.
+## Workspaces with Microsoft Defender for Cloud
+[Microsoft Defender for Servers (part of Defender for Cloud)](../../security-center/index.yml) [bills by the number of monitored services](https://azure.microsoft.com/pricing/details/azure-defender/) and provides 500 MB/server/day data allocation that is applied to the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security):
+
+- [WindowsEvent](/azure/azure-monitor/reference/tables/windowsevent)
+- [SecurityAlert](/azure/azure-monitor/reference/tables/securityalert)
+- [SecurityBaseline](/azure/azure-monitor/reference/tables/securitybaseline)
+- [SecurityBaselineSummary](/azure/azure-monitor/reference/tables/securitybaselinesummary)
+- [SecurityDetection](/azure/azure-monitor/reference/tables/securitydetection)
+- [SecurityEvent](/azure/azure-monitor/reference/tables/securityevent)
+- [WindowsFirewall](/azure/azure-monitor/reference/tables/windowsfirewall)
+- [MaliciousIPCommunication](/azure/azure-monitor/reference/tables/maliciousipcommunication)
+- [LinuxAuditLog](/azure/azure-monitor/reference/tables/linuxauditlog)
+- [SysmonEvent](/azure/azure-monitor/reference/tables/sysmonevent)
+- [ProtectionStatus](/azure/azure-monitor/reference/tables/protectionstatus)
+- [Update](/azure/azure-monitor/reference/tables/update) and [UpdateSummary](/azure/azure-monitor/reference/tables/updatesummary) when the Update Management solution isn't running in the workspace or solution targeting is enabled. See [What data types are included in the 500-MB data daily allowance?](../../defender-for-cloud/enhanced-security-features-overview.md#what-data-types-are-included-in-the-500-mb-data-daily-allowance)
+
+The count of monitored servers is calculated on an hourly granularity. The daily data allocation contributions from each monitored server are aggregated at the workspace level. If the workspace is in the legacy Per Node pricing tier, the Microsoft Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data.
+
+## Legacy pricing tiers
+Subscriptions that contained a Log Analytics workspace or Application Insights resource on April 2, 2018, or are linked to an Enterprise Agreement that started before February 1, 2019 and is still active, will continue to have access to the following legacy pricing tiers:
+
+- Free Trial
+- Standalone (Per GB)
+- Per Node (OMS)
+
+### Free Trial pricing tier
+Workspaces in the **Free Trial** pricing tier will have daily data ingestion limited to 500 MB (except for security data types collected by [Microsoft Defender for Cloud](../../security-center/index.yml)), and the data retention is limited to seven days. The Free Trial pricing tier is intended only for evaluation purposes. No SLA is provided for the Free tier. Workspaces in the Standalone or Per Node pricing tiers have user-configurable retention from 30 to 730 days. Creating new workspaces in (or moving existing workspaces into) the Free Trial pricing tier is possible until July 1, 2022.
+
+### Standalone pricing tier
+Usage on the **Standalone** pricing tier is billed by the ingested data volume. It is reported in the **Log Analytics** service and the meter is named "Data Analyzed".
+
+### Per Node pricing tier
+The **Per Node** pricing tier charges per monitored VM (node) at hourly granularity. For each monitored node, the workspace is allocated 500 MB of data per day that's not billed. This allocation is calculated with hourly granularity and is aggregated at the workspace level each day. Data ingested above the aggregate daily data allocation is billed per GB as data overage. On your bill, the service will be **Insight and Analytics** for Log Analytics usage if the workspace is in the Per Node pricing tier. Usage is reported on three meters:
+
+- **Node**: this is usage for the number of monitored nodes (VMs) in units of node months.
+- **Data Overage per Node**: this is the number of GB of data ingested in excess of the aggregated data allocation.
+- **Data Included per Node**: this is the amount of ingested data that was covered by the aggregated data allocation. This meter is also used in all pricing tiers to show the amount of data covered by the Microsoft Defender for Cloud data allocation.
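+
+As a rough sketch of how the allocation works (using the 500 MB/node/day allocation described above and purely illustrative volumes), the daily overage could be estimated like this:
+
+```kusto
+// Illustrative only: 5 monitored nodes ingesting 4 GB/day under the legacy Per Node tier.
+print nodes = 5, ingestedGBPerDay = 4.
+| extend includedGBPerDay = nodes * 0.5
+| extend overageGBPerDay = max_of(ingestedGBPerDay - includedGBPerDay, 0.)
+```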
+
+> [!TIP]
+> If your workspace has access to the **Per Node** pricing tier but you're wondering whether it would cost less in a Pay-As-You-Go tier, you can [use the query below](#evaluate-the-legacy-per-node-pricing-tier) for a recommendation.
+
+Workspaces created before April 2016 can continue to use the **Standard** and **Premium** pricing tiers that have fixed data retention of 30 days and 365 days, respectively. New workspaces can't be created in the **Standard** or **Premium** pricing tiers, and if a workspace is moved out of these tiers, it can't be moved back. Data ingestion meters on your Azure bill for these legacy tiers are called "Data analyzed."
++
+### Microsoft Defender for Cloud with legacy pricing tiers
+Following are considerations between legacy Log Analytics tiers and how usage is billed for [Microsoft Defender for Cloud](../../security-center/index.yml).
+
+- If the workspace is in the legacy Standard or Premium tier, Microsoft Defender for Cloud is billed only for Log Analytics data ingestion, not per node.
+- If the workspace is in the legacy Per Node tier, Microsoft Defender for Cloud is billed using the current [Microsoft Defender for Cloud node-based pricing model](https://azure.microsoft.com/pricing/details/security-center/).
+- In other pricing tiers (including commitment tiers), if Microsoft Defender for Cloud was enabled before June 19, 2017, Microsoft Defender for Cloud is billed only for Log Analytics data ingestion. Otherwise, Microsoft Defender for Cloud is billed using the current Microsoft Defender for Cloud node-based pricing model.
+
+More details of pricing tier limitations are available at [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md#log-analytics-workspaces).
+
+None of the legacy pricing tiers have regional-based pricing.
+
+> [!NOTE]
+> To use the entitlements that come from purchasing OMS E1 Suite, OMS E2 Suite, or OMS Add-On for System Center, choose the Log Analytics *Per Node* pricing tier.
+
+## Evaluate the legacy Per Node pricing tier
+It's often difficult to determine whether workspaces with access to the legacy **Per Node** pricing tier are better off in that tier or in a current **Pay-As-You-Go** or **Commitment Tier**. This involves understanding the trade-off between the fixed cost per monitored node in the Per Node pricing tier and its included data allocation of 500 MB/node/day and the cost of just paying for ingested data in the Pay-As-You-Go (Per GB) tier.
+
+The following query can be used to make a recommendation for the optimal pricing tier based on a workspace's usage patterns. This query looks at the monitored nodes and data ingested into a workspace in the last seven days, and for each day, it evaluates which pricing tier would have been optimal. To use the query, you need to specify:
+
+- Whether the workspace is using Microsoft Defender for Cloud by setting **workspaceHasSecurityCenter** to **true** or **false**.
+- Update the prices if you have specific discounts.
+- Specify the number of days to look back and analyze by setting **daysToEvaluate**. This is useful if the query is taking too long trying to look at seven days of data.
+
+```kusto
+// Set these parameters before running query
+// For Pay-As-You-Go (per-GB) and commitment tier pricing details, see https://azure.microsoft.com/pricing/details/monitor/.
+// You can see your per-node costs in your Azure usage and charge data. For more information, see https://docs.microsoft.com/en-us/azure/cost-management-billing/understand/download-azure-daily-usage.
+let workspaceHasSecurityCenter = true;  // Set to false if the workspace doesn't use Microsoft Defender for Cloud
+let daysToEvaluate = 7; // Number of previous days to analyze
+let PerNodePrice = 15.; // Monthly price per monitored node
+let PerNodeOveragePrice = 2.30; // Price per GB for data overage in the Per Node pricing tier
+let PerGBPrice = 2.30; // Enter the Pay-as-you-go price for your workspace's region (from https://azure.microsoft.com/pricing/details/monitor/)
+let CommitmentTier100Price = 196.; // Enter your price for the 100 GB/day commitment tier
+let CommitmentTier200Price = 368.; // Enter your price for the 200 GB/day commitment tier
+let CommitmentTier300Price = 540.; // Enter your price for the 300 GB/day commitment tier
+let CommitmentTier400Price = 704.; // Enter your price for the 400 GB/day commitment tier
+let CommitmentTier500Price = 865.; // Enter your price for the 500 GB/day commitment tier
+let CommitmentTier1000Price = 1700.; // Enter your price for the 1000 GB/day commitment tier
+let CommitmentTier2000Price = 3320.; // Enter your price for the 2000 GB/day commitment tier
+let CommitmentTier5000Price = 8050.; // Enter your price for the 5000 GB/day commitment tier
+//
+let SecurityDataTypes=dynamic(["SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent", "Update", "UpdateSummary"]);
+let StartDate = startofday(datetime_add("Day",-1*daysToEvaluate,now()));
+let EndDate = startofday(now());
+union *
+| where TimeGenerated >= StartDate and TimeGenerated < EndDate
+| extend computerName = tolower(tostring(split(Computer, '.')[0]))
+| where computerName != ""
+| summarize nodesPerHour = dcount(computerName) by bin(TimeGenerated, 1h)
+| summarize nodesPerDay = sum(nodesPerHour)/24. by day=bin(TimeGenerated, 1d)
+| join kind=leftouter (
+ Heartbeat
+ | where TimeGenerated >= StartDate and TimeGenerated < EndDate
+ | where Computer != ""
+ | summarize ASCnodesPerHour = dcount(Computer) by bin(TimeGenerated, 1h)
+ | extend ASCnodesPerHour = iff(workspaceHasSecurityCenter, ASCnodesPerHour, 0)
+ | summarize ASCnodesPerDay = sum(ASCnodesPerHour)/24. by day=bin(TimeGenerated, 1d)
+) on day
+| join (
+ Usage
+ | where TimeGenerated >= StartDate and TimeGenerated < EndDate
+ | where IsBillable == true
+ | extend NonSecurityData = iff(DataType !in (SecurityDataTypes), Quantity, 0.)
+ | extend SecurityData = iff(DataType in (SecurityDataTypes), Quantity, 0.)
+ | summarize DataGB=sum(Quantity)/1000., NonSecurityDataGB=sum(NonSecurityData)/1000., SecurityDataGB=sum(SecurityData)/1000. by day=bin(StartTime, 1d)
+) on day
+| extend AvgGbPerNode = NonSecurityDataGB / nodesPerDay
+| extend OverageGB = iff(workspaceHasSecurityCenter,
+ max_of(DataGB - 0.5*nodesPerDay - 0.5*ASCnodesPerDay, 0.),
+ max_of(DataGB - 0.5*nodesPerDay, 0.))
+| extend PerNodeDailyCost = nodesPerDay * PerNodePrice / 31. + OverageGB * PerNodeOveragePrice
+| extend billableGB = iff(workspaceHasSecurityCenter,
+ (NonSecurityDataGB + max_of(SecurityDataGB - 0.5*ASCnodesPerDay, 0.)), DataGB )
+| extend PerGBDailyCost = billableGB * PerGBPrice
+| extend CommitmentTier100DailyCost = CommitmentTier100Price + max_of(billableGB - 100, 0.)* CommitmentTier100Price/100.
+| extend CommitmentTier200DailyCost = CommitmentTier200Price + max_of(billableGB - 200, 0.)* CommitmentTier200Price/200.
+| extend CommitmentTier300DailyCost = CommitmentTier300Price + max_of(billableGB - 300, 0.)* CommitmentTier300Price/300.
+| extend CommitmentTier400DailyCost = CommitmentTier400Price + max_of(billableGB - 400, 0.)* CommitmentTier400Price/400.
+| extend CommitmentTier500DailyCost = CommitmentTier500Price + max_of(billableGB - 500, 0.)* CommitmentTier500Price/500.
+| extend CommitmentTier1000DailyCost = CommitmentTier1000Price + max_of(billableGB - 1000, 0.)* CommitmentTier1000Price/1000.
+| extend CommitmentTier2000DailyCost = CommitmentTier2000Price + max_of(billableGB - 2000, 0.)* CommitmentTier2000Price/2000.
+| extend CommitmentTier5000DailyCost = CommitmentTier5000Price + max_of(billableGB - 5000, 0.)* CommitmentTier5000Price/5000.
+| extend MinCost = min_of(
+ PerNodeDailyCost,PerGBDailyCost,CommitmentTier100DailyCost,CommitmentTier200DailyCost,
+ CommitmentTier300DailyCost, CommitmentTier400DailyCost, CommitmentTier500DailyCost, CommitmentTier1000DailyCost, CommitmentTier2000DailyCost, CommitmentTier5000DailyCost)
+| extend Recommendation = case(
+ MinCost == PerNodeDailyCost, "Per node tier",
+ MinCost == PerGBDailyCost, "Pay-as-you-go tier",
+ MinCost == CommitmentTier100DailyCost, "Commitment tier (100 GB/day)",
+ MinCost == CommitmentTier200DailyCost, "Commitment tier (200 GB/day)",
+ MinCost == CommitmentTier300DailyCost, "Commitment tier (300 GB/day)",
+ MinCost == CommitmentTier400DailyCost, "Commitment tier (400 GB/day)",
+ MinCost == CommitmentTier500DailyCost, "Commitment tier (500 GB/day)",
+ MinCost == CommitmentTier1000DailyCost, "Commitment tier (1000 GB/day)",
+ MinCost == CommitmentTier2000DailyCost, "Commitment tier (2000 GB/day)",
+ MinCost == CommitmentTier5000DailyCost, "Commitment tier (5000 GB/day)",
+ "Error"
+)
+| project day, nodesPerDay, ASCnodesPerDay, NonSecurityDataGB, SecurityDataGB, OverageGB, AvgGbPerNode, PerGBDailyCost, PerNodeDailyCost,
+ CommitmentTier100DailyCost, CommitmentTier200DailyCost, CommitmentTier300DailyCost, CommitmentTier400DailyCost, CommitmentTier500DailyCost, CommitmentTier1000DailyCost, CommitmentTier2000DailyCost, CommitmentTier5000DailyCost, Recommendation
+| sort by day asc
+//| project day, Recommendation // Uncomment this line to see only the recommendation for each day
+```
+
+This query isn't an exact replication of how usage is calculated, but it provides pricing tier recommendations in most cases.
+
+> [!NOTE]
+> To use the entitlements that come from purchasing OMS E1 Suite, OMS E2 Suite, or OMS Add-On for System Center, choose the Log Analytics *Per Node* pricing tier.
++
+## Next steps
+- See [Azure Monitor cost and usage](../usage-estimated-costs.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill.
+- See [Analyze usage in Log Analytics workspace](analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce the amount of data collected.
+- See [Set daily cap on Log Analytics workspace](daily-cap.md) to control your costs by configuring a maximum volume that may be ingested in a workspace each day.
+- See [Azure Monitor best practices - Cost management](../best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges.
azure-monitor Custom Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-fields.md
Last updated 10/20/2021
> This article describes how to parse text data in a Log Analytics workspace as it's collected. We recommend parsing text data in a query filter after it's collected following the guidance described in [Parse text data in Azure Monitor](./parse-text.md). It provides several advantages over using custom fields. > [!IMPORTANT]
-> Custom fields increases the amount of data collected in the Log Analytics workspace which can increase your cost. See [Manage usage and costs with Azure Monitor Logs](./manage-cost-storage.md#pricing-model) for details.
+> Custom fields increase the amount of data collected in the Log Analytics workspace, which can increase your cost. See [Azure Monitor Logs pricing details](cost-logs.md) for details.
The **Custom Fields** feature of Azure Monitor allows you to extend existing records in your Log Analytics workspace by adding your own searchable fields. Custom fields are automatically populated from data extracted from other properties in the same record.
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md
Customer-Managed key is provided on dedicated cluster and these operations are r
- 409 - Workspace link or unlink operation in process. ## Next steps -- Learn about [Log Analytics dedicated cluster billing](./manage-cost-storage.md#log-analytics-dedicated-clusters)
+- Learn about [Log Analytics dedicated cluster billing](cost-logs.md#dedicated-clusters)
- Learn about [proper design of Log Analytics workspaces](./design-logs-deployment.md)
azure-monitor Daily Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md
+
+ Title: Set daily cap on Log Analytics workspace
+description: Set a daily cap on the amount of data that can be collected in a Log Analytics workspace each day.
++ Last updated : 03/28/2022+
+
+# Set daily cap on Log Analytics workspace
+A daily cap on a Log Analytics workspace allows you to avoid unexpected increases in charges for data ingestion by stopping collection of billable data for the rest of the day whenever a specified threshold is reached. This article describes how the daily cap works and how to configure one in your workspace.
+
+> [!IMPORTANT]
+> Use care when setting a daily cap. When data collection stops, your ability to observe your resources and to receive alerts when their health conditions change will be impacted. A daily cap can also affect other Azure services and solutions whose functionality may depend on up-to-date data being available in the workspace. Your goal shouldn't be to regularly hit the daily limit, but rather to use it as an infrequent safeguard against unplanned charges resulting from an unexpected increase in the volume of data collected.
++
+## How the daily cap works
+Each workspace has a daily cap that defines its own data volume limit. When the daily cap is reached, a warning banner appears across the top of the page for the selected Log Analytics workspace in the Azure portal, and an operation event is sent to the *Operation* table under the **LogManagement** category. You can optionally create an alert rule to send an alert when this event is created.
+
+Data collection resumes at the reset time, which is a different hour of the day for each workspace. This reset hour can't be configured.
+
+> [!NOTE]
+> The daily cap can't stop data collection at precisely the specified cap level and some excess data is expected, particularly if the workspace is receiving high volumes of data. If data is collected above the cap, it's still billed. See [View the effect of the Daily Cap](#view-the-effect-of-the-daily-cap) for a query that is helpful in studying the daily cap behavior.
+
+## Application Insights
+You shouldn't create a daily cap for workspace-based Application Insights resources but instead create a daily cap for their workspace. You do need to create a separate daily cap for any classic Application Insights resources since their data doesn't reside in a Log Analytics workspace.
+
+> [!TIP]
+> If you're concerned about the amount of billable data collected by Application Insights, you should configure [sampling](../app/sampling.md) to tune its data volume to the level you want. Use the daily cap as a safety method in case your application unexpectedly begins to send much higher volumes of telemetry.
+
+The maximum cap for an Application Insights classic resource is 1,000 GB/day unless you request a higher maximum for a high-traffic application. When you create a resource in the Azure portal, the daily cap is set to 100 GB/day. When you create a resource in Visual Studio, the default is small (only 32.3 MB/day). The daily cap default is set to facilitate testing. It's intended that the user will raise the daily cap before deploying the app into production.
+
+We've removed the restriction on some subscription types that have credit that couldn't be used for Application Insights. Previously, if the subscription had a spending limit, the daily cap dialog included instructions to remove the spending limit before the daily cap could be raised beyond 32.3 MB/day.
++
+## Determine your daily cap
+To help you determine an appropriate daily cap for your workspace, see [Azure Monitor cost and usage](../usage-estimated-costs.md) to understand your data ingestion trends. You can also review [Analyze usage in Log Analytics workspace](analyze-usage.md), which provides methods to analyze your workspace usage in more detail.
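+
+For a quick look at recent ingestion, a Usage query along these lines (a sketch following the same pattern as the queries later in this article) charts the daily billable volume that a cap would apply to:
+
+```kusto
+// Daily billable ingestion over the last month, to help choose a daily cap value
+Usage
+| where TimeGenerated > ago(31d)
+| where IsBillable == true
+| summarize IngestedGB = sum(Quantity) / 1000. by bin(StartTime, 1d) // Quantity is reported in MB
+| render columnchart
+```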
+++
+## Workspaces with Microsoft Defender for Cloud
+For workspaces with [Microsoft Defender for Cloud](../../security-center/index.yml), the daily cap doesn't stop the collection of the following data types, except for workspaces in which Microsoft Defender for Cloud was installed before June 19, 2017:
+
+- WindowsEvent
+- SecurityAlert
+- SecurityBaseline
+- SecurityBaselineSummary
+- SecurityDetection
+- SecurityEvent
+- WindowsFirewall
+- MaliciousIPCommunication
+- LinuxAuditLog
+- SysmonEvent
+- ProtectionStatus
+- Update
+- UpdateSummary
++
+## Set the daily cap
+### Log Analytics workspace
+To set or change the daily cap for a Log Analytics workspace in the Azure portal:
+
+1. From the **Log Analytics workspaces** menu, select your workspace, and then **Usage and estimated costs**.
+2. Select **Data Cap** at the top of the page.
+3. Select **ON** and then set the data volume limit in GB/day.
++
+> [!NOTE]
+> The reset hour for the workspace is displayed but cannot be configured.
+
+To configure the daily cap with Azure Resource Manager, set the `dailyQuotaGb` parameter under `WorkspaceCapping` as described at [Workspaces - Create Or Update](/rest/api/loganalytics/workspaces/createorupdate#workspacecapping).
++
+### Classic Application Insights resource
+To set or change the daily cap for a classic Application Insights resource in the Azure portal:
+
+1. From the **Monitor** menu, select **Applications**, your application, and then **Usage and estimated costs**.
+2. Select **Data Cap** at the top of the page.
+3. Set the data volume limit in GB/day.
+4. If you want an email sent to the subscription administrator when the daily limit is reached, select that option.
+5. Set the daily cap warning level as a percentage of the data volume limit.
+6. If you want an email sent to the subscription administrator when the daily cap warning level is reached, select that option.
++
+To configure the daily cap with Azure Resource Manager, set the `dailyQuota`, `dailyQuotaResetTime`, and `warningThreshold` parameters as described at [Set the daily cap](../app/powershell.md#set-the-daily-cap).
++
+## Alert when daily cap is reached
+When the daily cap is reached for a Log Analytics workspace, a banner is displayed in the Azure portal, and an event is written to the **Operation** table in the workspace. You should create an alert rule to proactively notify you when this occurs.
+
+To receive an alert when the daily cap is reached, create a [log alert rule](../alerts/alerts-unified-log.md) with the following details.
+
+| Setting | Value |
+|:|:|
+| **Scope** | |
+| Target scope | Select your Log Analytics workspace. |
+| **Condition** | |
+| Signal type | Log |
+| Signal name | Custom log search |
+| Query | `_LogOperation | where Operation =~ "Data collection stopped" | where Detail contains "OverQuota"` |
+| Measurement | Measure: *Table rows*<br>Aggregation type: Count<br>Aggregation granularity: 5 minutes |
+| Alert Logic | Operator: Greater than<br>Threshold value: 0<br>Frequency of evaluation: 5 minutes |
+| Actions | Select or add an [action group](../alerts/action-groups.md) to notify you when the threshold is exceeded. |
+| **Details** | |
+| Severity| Warning |
+| Alert rule name | Daily data limit reached |
++
+### Classic Application Insights resource
+When the daily cap is reached for a classic Application Insights resource, an event is created in the Azure Activity log with the following signal names. You can optionally have an email sent to the subscription administrator both when the cap is reached and when a specified percentage of the daily cap has been reached.
+
+* Application Insights component daily cap warning threshold reached
+* Application Insights component daily cap reached
+
+To create an alert when the daily cap is reached, create an [Activity log alert rule](../alerts/alerts-activity-log.md#azure-portal) with the following details.
++
+| Setting | Value |
+|:|:|
+| **Scope** | |
+| Target scope | Select your application. |
+| **Condition** | |
+| Signal type | Activity Log |
+| Signal name | Application Insights component daily cap reached<br>Or<br>Application Insights component daily cap warning threshold reached |
+| Severity| Warning |
+| Alert rule name | Daily data limit reached |
+++
+## View the effect of the daily cap
+The following query can be used to track the data volumes that are subject to the daily cap for a Log Analytics workspace between daily cap resets. The query excludes the security data types that aren't subject to the daily cap. In this example, the workspace's reset hour is 14:00. Change this value to match your workspace's reset hour.
+
+```kusto
+let DailyCapResetHour=14;
+Usage
+| where DataType !in ("SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent")
+| where TimeGenerated > ago(32d)
+| extend StartTime=datetime_add("hour",-1*DailyCapResetHour,StartTime)
+| where StartTime > startofday(ago(31d))
+| where IsBillable
+| summarize IngestedGbBetweenDailyCapResets=sum(Quantity)/1000. by day=bin(StartTime , 1d) // Quantity in units of MB
+| render areachart
+```
+Add the `Update` and `UpdateSummary` data types to the `where DataType` line when the Update Management solution isn't running on the workspace or solution targeting is enabled ([learn more](../../security-center/security-center-pricing.md#what-data-types-are-included-in-the-500-mb-data-daily-allowance)), as shown in the sketch below.
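+
+For example, with those two data types added, the query becomes the following sketch (identical to the query above except for the extended exclusion list):
+
+```kusto
+let DailyCapResetHour=14;
+Usage
+| where DataType !in ("SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent", "Update", "UpdateSummary")
+| where TimeGenerated > ago(32d)
+| extend StartTime=datetime_add("hour",-1*DailyCapResetHour,StartTime)
+| where StartTime > startofday(ago(31d))
+| where IsBillable
+| summarize IngestedGbBetweenDailyCapResets=sum(Quantity)/1000. by day=bin(StartTime, 1d) // Quantity in units of MB
+| render areachart
+```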
+
+## Next steps
+
+- See [Azure Monitor Logs pricing details](cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
+- See [Analyze usage in Log Analytics workspace](analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce the amount of data collected.
azure-monitor Data Collection Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-collection-troubleshoot.md
+
+ Title: Troubleshoot why data is no longer being collected in Azure Monitor
+description: Steps to take if data is no longer being collected in Log Analytics workspace in Azure Monitor.
+ Last updated : 03/31/2022+
+
+# Troubleshoot why data is no longer being collected in Azure Monitor
+This article provides guidance to detect when data collection in Azure Monitor stops and steps you can take to determine and correct the causes.
++
+## Data collection status
+When data collection in a Log Analytics workspace stops, an event with a type of **Operation** is created in the workspace. Run the following query to check whether you're reaching the daily limit and missing data:
+
+```kusto
+Operation | where OperationCategory == 'Data Collection Status'
+```
+
+When data collection stops, the **OperationStatus** is **Warning**. When data collection starts, the **OperationStatus** is **Succeeded**.
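+
+For example, a narrower variant of that query (a sketch) surfaces only the events where collection stopped:
+
+```kusto
+Operation
+| where OperationCategory == 'Data Collection Status'
+| where OperationStatus == 'Warning' // data collection stopped; 'Succeeded' indicates collection started again
+| sort by TimeGenerated desc
+```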
+
+To be notified when data collection stops, use the steps described in the [Alert when daily cap is reached](daily-cap.md#alert-when-daily-cap-is-reached) section. To configure an e-mail, webhook, or runbook action for the alert rule, use the steps described in [create an action group](../alerts/action-groups.md).
+
+## Daily cap was reached
+The [daily cap](daily-cap.md) limits the amount of data that a Log Analytics workspace can collect in a day. If the daily cap is reached, then data collection will stop until the reset time. Either wait for collection to automatically restart, or increase the daily data volume limit.
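+
+To confirm that the cap is what stopped collection, you can run the same query used by the daily cap alert rule described in [Alert when daily cap is reached](daily-cap.md#alert-when-daily-cap-is-reached):
+
+```kusto
+_LogOperation
+| where Operation =~ "Data collection stopped"
+| where Detail contains "OverQuota"
+```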
++
+## Legacy free pricing tier
+If your Log Analytics workspace is on the [legacy Free pricing tier](cost-logs.md#legacy-pricing-tiers) and has collected more than 500 MB of data in a day, data collection stops for the rest of the day. Wait until the following day for collection to automatically restart, or change to a paid pricing tier.
++
+## Workspace reached the data ingestion volume rate
+The [default ingestion volume rate limit](../service-limits.md#log-analytics-workspaces) for data sent from Azure resources using diagnostic settings is approximately 6 GB/min per workspace. This is an approximate value because the actual size can vary between data types, depending on the log length and its compression ratio. This limit doesn't apply to data that's sent from agents or the [Data Collector API](data-collector-api.md).
+
+If you send data at a higher rate to a single workspace, some data is dropped, and an event is sent to the **Operation** table in your workspace every 6 hours while the threshold continues to be exceeded. If your ingestion volume continues to exceed the rate limit or you are expecting to reach it sometime soon, you can request an increase to your workspace by sending an email to LAIngestionRate@microsoft.com or by opening a support request.
+
+Use the following query to retrieve the record that indicates the data ingestion rate limit was reached.
+
+```kusto
+Operation
+| where OperationCategory == "Ingestion"
+| where Detail startswith "The rate of data crossed the threshold"
+```
+
+## Azure subscription is in a suspended state
+Your Azure subscription could be in a suspended state for one of the following reasons:
+
+- Free trial ended
+- Azure pass expired
+- Monthly spending limit reached (such as on an MSDN or Visual Studio subscription)
++
+## Limits summary
+
+There are additional Log Analytics limits, some of which depend on the Log Analytics pricing tier. These are documented at [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md#log-analytics-workspaces).
++
+## Next steps
+
+- See [Analyze usage in Log Analytics workspace](../logs/analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce the amount of data collected.
azure-monitor Data Ingestion From File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-ingestion-from-file.md
+
+ Title: Ingest data from a file using Data Collection Rules (DCR)
+description: Learn how to ingest data from a file into a Log Analytics workspace using Data Collection Rules (DCR).
+++ Last updated : 03/21/2022++
+# Customer intent: As a DevOps specialist, I want to ingest external data from a file into a workspace.
+
+# Collect and ingest data from a file using Data Collection Rules (DCR) (Preview)
+
+If you want to collect log files from your systems using agents, you can use Data Collection Rules.
+
+You can define how Azure Monitor transforms and stores data ingested into your workspace by setting [Data Collection Rules (DCR)](https://docs.microsoft.com/azure/azure-monitor/essentials/data-collection-rule-overview). Using DCR lets you ingest data quickly from different log formats.
+
+This tutorial explains how to ingest data from a file into a Log Analytics workspace using DCR.
+
+>[!NOTE]
+> * To use this method, you need to use the MMA agent. We recommend using the AMA agent instead, which has more native integration with Custom Logs v2 (currently in preview).
+> * Use [Custom Logs v2](https://docs.microsoft.com/azure/azure-monitor/logs/custom-logs-overview), which allows transformations and exports.
+
+## Prerequisites
+
+To complete this tutorial, you need a [Log Analytics workspace](quick-create-workspace.md).
+
+## Create a custom log table
+
+>[!TIP]
+> * If you already have a custom log table, you can skip this step and go directly to creating a DCR.
+
+Before you can send data to the workspace, you need to create the custom table that the data will be sent to:
+
+1. Go to the **Log Analytics workspaces** menu in the Azure portal and select a workspace.
+1. Select **Custom Log** > **Add custom log**.
+1. Upload a sample log file.
+1. Select a record delimiter.
+1. Set a collection path:
+ 1. Select Windows or Linux to specify which path format you're adding.
+    1. Set the path to the custom log file on your machine.
+1. Specify a name for the table. Azure Monitor automatically adds the *_CL* (custom log) suffix to the table name.
+1. Select **Create**.
+## Create a Data Collection Rule (DCR)
+1. Create a DCR, making sure the name of the stream is `Custom-{TableName}`.
+
+ For example:
+
+ ```json
+ {
+ "properties": {
+ "destinations": {
+ "logAnalytics": [
+ {
+                        "workspaceResourceId": "/subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<WorkspaceName>",
+ "workspaceId": "WorkspaceID",
+ "name": "MyLogFolder"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Custom-DataPullerE2E_CL"
+ ],
+ "destinations": [
+ "MyLogFolder"
+ ],
+ "transformKql": "source",
+ "outputStream": "Custom-DataPullerE2E_CL"
+ }
+ ]
+ },
+ "location": "eastus2euap",
+ "id": "/subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroupName>/providers/Microsoft.Insights/dataCollectionRules/<DCRName>",
+ "name": "<DCRName>",
+ "type": "Microsoft.Insights/dataCollectionRules"
+ }
+ ```
+
+1. Set the Data Collection Rule to be the default on the workspace. Use the following API command:
+
+ ```json
+ PUT https://management.azure.com/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<WorkspaceName>?api-version=2015-11-01-preview
+ {
+ "properties": {
+ "defaultDataCollectionRuleResourceId": "/subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroupName>/providers/Microsoft.Insights/dataCollectionRules/<DCRName>"
+ },
+ "location": "eastus2euap",
+ "type": "Microsoft.OperationalInsights/workspaces"
+ }
+ ```
+
+1. Mark the table as eligible for file-based custom log ingestion via DCR by using the Custom log definition API.
+
+ 1. First run the following Get command:
+
+ ```json
+ GET https://management.azure.com/subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/MyLogFolder/logsettings/customlogs/definitions/DataPullerE2E_CL?api-version=2020-08-01
+ ```
+
+ 1. Copy the response and send a PUT request:
+
+ ```JSON
+ {
+ "Name": "DataPullerE2E_CL",
+ "Description": "custom log to test Data puller E2E",
+ "Inputs": [
+ {
+ "Location": {
+ "FileSystemLocations": {
+ "WindowsFileTypeLogPaths": [
+ "C:\\MyLogFolder\\*.txt",
+ "C:\\MyLogFolder\\MyLogFolder.txt"
+ ]
+ }
+ },
+ "RecordDelimiter": {
+ "RegexDelimiter": {
+                    "Pattern": "\\n",
+ "MatchIndex": 0,
+ "NumberedGroup": null
+ }
+ }
+ }
+ ],
+ "Properties": [
+ {
+ "Name": "TimeGenerated",
+ "Type": "DateTime",
+ "Extraction": {
+ "DateTimeExtraction": {}
+ }
+ }
+ ],
+ "SetDataCollectionRuleBased": true
+ }
+ ```
+
+ >[!Note]
+    > * The `SetDataCollectionRuleBased` flag in the last API command enables the table for DCR-based (data puller) ingestion.
+    > * Once you switch to DCREnabled mode, data will stop flowing unless you have a DCR configured.
+
+ * To validate that the value is updated, run:
+ ```json
+ GET https://management.azure.com/subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroupName>/providers/microsoft.operationalinsights/workspaces/MyLogFolder/datasources?api-version=2020-08-01&$filter=(kind%20eq%20'CustomLog')
+ ```
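+
+After the DCR is configured and the agent starts collecting, a quick query like the following sketch (using the example table name from above) confirms that records are arriving in the custom table:
+
+```kusto
+DataPullerE2E_CL
+| where TimeGenerated > ago(1h)
+| sort by TimeGenerated desc
+| take 10
+```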
azure-monitor Data Retention Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md
You can also purge data from a workspace using the [purge feature](personal-data
The Log Analytics [Purge API](/rest/api/loganalytics/workspacepurge/purge) doesn't affect retention billing. **To lower retention costs, decrease the retention period for the workspace or for specific tables.** ## Tables with unique retention policies
-By default, the tables of two data types - `Usage` and `AzureActivity` - keep data for at least 90 days at no charge. Increasing the workspace retention policy to more than 90 days also increases the retention policy of these tables. These tables are also free from data ingestion charges.
+By default, two data types - `Usage` and `AzureActivity` - keep data for at least 90 days at no charge. When you increase the workspace retention to more than 90 days, you also increase the retention of these data types, and you'll be charged for retaining this data beyond the 90-day period. These tables are also free from data ingestion charges.
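+
+As a quick check, a sketch like the following shows whether records of these data types are marked as billable at ingestion (see the `_IsBillable` standard column):
+
+```kusto
+union AzureActivity, Usage
+| where TimeGenerated > ago(1d)
+| summarize Records = count() by Type, _IsBillable
+```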
Tables related to Application Insights resources also keep data for 90 days at no charge. You can adjust the retention policy of each of these tables individually.
You'll be charged for each day you retain data. The cost of retaining data for p
For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+## Classic Application Insights resources
+Data for workspace-based Application Insights resources is stored in a Log Analytics workspace, so it's included in the data retention and archive settings for the workspace. Classic Application Insights resources, though, have separate retention settings.
+
+The default retention for Application Insights resources is 90 days. Different retention periods can be selected for each Application Insights resource. The full set of available retention periods is 30, 60, 90, 120, 180, 270, 365, 550 or 730 days.
+
+To change the retention, from your Application Insights resource, go to the **Usage and Estimated Costs** page and select the **Data Retention** option:
+
+![Screenshot that shows where to change the data retention period.](../app/media/pricing/pricing-005.png)
+
+When the retention is lowered, there's a several-day grace period before the oldest data is removed.
+
+The retention can also be [set programmatically using PowerShell](../app/powershell.md#set-the-data-retention) with the `retentionInDays` parameter. If you set the data retention to 30 days, you can trigger an immediate purge of older data using the `immediatePurgeDataOn30Days` parameter, which may be useful for compliance-related scenarios. This purge functionality is only exposed via Azure Resource Manager and should be used with extreme care. The daily reset time for the data volume cap can be configured using Azure Resource Manager to set the `dailyQuotaResetTime` parameter.
+ ## Next steps - [Learn more about Log Analytics workspaces and data retention and archive.](log-analytics-workspace-overview.md) - [Create a search job to retrieve archive data matching particular criteria.](search-jobs.md)
azure-monitor Design Logs Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/design-logs-deployment.md
A Log Analytics workspace provides:
* A geographic location for data storage. * Data isolation by granting different users access rights following one of our recommended design strategies.
-* Scope for configuration of settings like [pricing tier](./manage-cost-storage.md#changing-pricing-tier), [retention](./manage-cost-storage.md#change-the-data-retention-period), and [data capping](./manage-cost-storage.md#manage-your-maximum-daily-data-volume).
+* Scope for configuration of settings like [pricing tier](cost-logs.md#commitment-tiers), [retention](data-retention-archive.md), and [data capping](daily-cap.md).
Workspaces are hosted on physical clusters. By default, the system creates and manages these clusters. Customers that ingest more than 4 TB/day are expected to create their own dedicated clusters for their workspaces, which gives them better control and a higher ingestion rate.
azure-monitor Log Analytics Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-overview.md
Each workspace contains multiple tables that are organized into separate columns
## Cost There is no direct cost for creating or maintaining a workspace. You're charged for the data sent to it (data ingestion) and how long that data is stored (data retention). These costs may vary based on the data plan of each table as described in [Log data plans (preview)](#log-data-plans-preview).
-See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for detailed pricing and [Manage usage and costs with Azure Monitor Logs](manage-cost-storage.md) for guidance on reducing your costs. If you are using your Log Analytics workspace with services other than Azure Monitor, then see the documentation for those services for pricing information.
+See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for detailed pricing and [Azure Monitor best practices - Cost management](../best-practices-cost.md) for guidance on reducing your costs. If you are using your Log Analytics workspace with services other than Azure Monitor, then see the documentation for those services for pricing information.
## Log data plans (preview) By default, all tables in a workspace are **Analytics** tables, which are available to all features of Azure Monitor and any other services that use the workspace. You can configure certain tables as **Basic Logs (preview)** to reduce the cost of storing high-volume verbose logs you use for debugging, troubleshooting and auditing, but not for analytics and alerts. Tables configured for Basic Logs have a lower ingestion cost in exchange for reduced features.
azure-monitor Log Standard Columns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-standard-columns.md
union withsource = tt *
``` ## \_BilledSize
-The **\_BilledSize** column specifies the size in bytes of data that will be billed to your Azure account if **\_IsBillable** is true. [Learn more](manage-cost-storage.md#data-size) about the details of how the billed size is calculated.
+The **\_BilledSize** column specifies the size in bytes of data that will be billed to your Azure account if **\_IsBillable** is true. See [Data size calculation](cost-logs.md#data-size-calculation) to learn more about the details of how the billed size is calculated.
### Examples
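+
+For instance, a sketch of a query that sums billed size per billable table:
+
+```kusto
+union withsource = tt *
+| where _IsBillable == true
+| summarize BilledGB = sum(_BilledSize) / 1e9 by tt // _BilledSize is in bytes
+| sort by BilledGB desc
+```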
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
All operations on the cluster level require the `Microsoft.OperationalInsights/c
## Cluster pricing model-
-Log Analytics Dedicated Clusters use a Commitment Tier (formerly called capacity reservations) pricing model of at least 500 GB/day. Any usage above the tier level will be billed at effective per-GB rate of that Commitment Tier. Commitment Tier pricing information is available at the [Azure Monitor pricing page]( https://azure.microsoft.com/pricing/details/monitor/).
-
-The cluster Commitment Tier level is configured programmatically with Azure Resource Manager using the `Capacity` parameter under `Sku`. The `Capacity` is specified in units of GB and can have values of 500, 1000, 2000 or 5000 GB/day.
-
-There are two modes of billing for usage on a cluster. These can be specified by the `billingType` parameter when configuring your cluster.
-
-1. **Cluster (default)**--Billing for ingested data is done at the cluster level. The ingested data quantities from each workspace associated to a cluster are aggregated to calculate the daily bill for the cluster.
-
-2. **Workspaces**--The Commitment Tier costs for your Cluster are attributed proportionately to the workspaces in the cluster, by each workspace's data ingestion volume (after accounting for per-node allocations from [Microsoft Defender for Cloud](../../security-center/index.yml) for each workspace.) Details of pricing model are explained [here](./manage-cost-storage.md#log-analytics-dedicated-clusters).
-
-If your linked workspace is using legacy Per Node pricing tier, it will be billed based on data ingested against the cluster's Commitment Tier, and no longer Per Node. Per-node data allocations from Microsoft Defender for Cloud will continue to be applied.
-
-When you link workspaces to a cluster, the pricing tier is changed to cluster, and ingestion is billed based on cluster's Commitment Tier. Workspaces can be unlinked from a cluster at any time, and pricing tier change to per-GB.
-
-Complete details are billing for Log Analytics dedicated clusters are available [here](./manage-cost-storage.md#log-analytics-dedicated-clusters).
+Log Analytics Dedicated Clusters use a commitment tier pricing model of at least 500 GB/day. Any usage above the tier level is billed at the effective per-GB rate of that commitment tier. See [Azure Monitor Logs pricing details](cost-logs.md#dedicated-clusters) for pricing details for dedicated clusters.
## Create a dedicated cluster
You must specify the following properties when you create a new dedicated cluste
- **ClusterName** - **ResourceGroupName**: You should use a central IT resource group because clusters are usually shared by many teams in the organization. For more design considerations, review [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md). - **Location**-- **SkuCapacity**: The Commitment Tier (formerly called capacity reservations) can be set to 500, 1000, 2000 or 5000 GB/day. For more information on cluster costs, see [Manage Costs for Log Analytics clusters](./manage-cost-storage.md#log-analytics-dedicated-clusters).
+- **SkuCapacity**: The Commitment Tier (formerly called capacity reservations) can be set to 500, 1000, 2000 or 5000 GB/day. For more information on cluster costs, see [Dedicated clusters](./cost-logs.md#dedicated-clusters).
The user account that creates the clusters must have the standard Azure resource creation permission: `Microsoft.Resources/deployments/*` and cluster write permission `Microsoft.OperationalInsights/clusters/write` by having in their role assignments this specific action or `Microsoft.OperationalInsights/*` or `*/write`.
After you create your cluster resource and it's fully provisioned, you can edit
- **Identity** - The identity used to authenticate to your Key Vault. This can be System-assigned or User-assigned. - **billingType** - Billing attribution for the cluster resource and its data. Includes on the following values: - **Cluster (default)**--The costs for your cluster are attributed to the cluster resource.
- - **Workspaces**--The costs for your cluster are attributed proportionately to the workspaces in the Cluster, with the cluster resource being billed some of the usage if the total ingested data for the day is under the commitment tier. See [Log Analytics Dedicated Clusters](./manage-cost-storage.md#log-analytics-dedicated-clusters) to learn more about the cluster pricing model.
+ - **Workspaces**--The costs for your cluster are attributed proportionately to the workspaces in the Cluster, with the cluster resource being billed some of the usage if the total ingested data for the day is under the commitment tier. See [Log Analytics Dedicated Clusters](./cost-logs.md#dedicated-clusters) to learn more about the cluster pricing model.
>[!IMPORTANT]
Authorization: Bearer <token>
## Next steps -- Learn about [Log Analytics dedicated cluster billing](./manage-cost-storage.md#log-analytics-dedicated-clusters)
+- Learn about [Log Analytics dedicated cluster billing](cost-logs.md#dedicated-clusters)
- Learn about [proper design of Log Analytics workspaces](../logs/design-logs-deployment.md)
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-cost-storage.md
- Title: Manage usage and costs for Azure Monitor Logs
-description: Learn how to change the pricing plan and manage data volume and retention policy for your Log Analytics workspace in Azure Monitor.
------ Previously updated : 03/05/2022---
-
-# Manage usage and costs with Azure Monitor Logs
-
-> [!NOTE]
-> This article describes how to understand and control your costs for Azure Monitor Logs. A related article, [Monitoring usage and estimated costs](..//usage-estimated-costs.md) describes how to view usage and estimated costs across multiple Azure monitoring features using [Azure Cost Management + Billing](../logs/manage-cost-storage.md#viewing-log-analytics-usage-on-your-azure-bill). All prices and costs in this article are for example purposes only.
-
-Azure Monitor Logs is designed to scale and support collecting, indexing, and storing massive amounts of data per day from any source in your enterprise or deployed in Azure. Although this might be a primary driver for your organization, cost-efficiency is ultimately the underlying driver. To that end, it's important to understand that the cost of a Log Analytics workspace isn't based only on the volume of data collected; it's also dependent on the selected plan, and how long you stored data generated from your connected sources.
-
-This article reviews how you can proactively monitor ingested data volume and storage growth. It also discusses how to define limits to control those associated costs.
-
-## Pricing model
-
-The default pricing for Log Analytics is a **Pay-As-You-Go** model that's based on ingested data volume and, optionally, for longer data retention. Data volume is measured as the size of the data that will be stored in GB (10^9 bytes). Each Log Analytics workspace is charged as a separate service and contributes to the bill for your Azure subscription. The amount of data ingestion can be considerable, depending on the following factors:
-
- - The set of management solutions enabled and their configuration
- - The number and type of monitored resources
- - Type of data collected from each monitored resource
-
-<a name="commitment-tier"></a>
-### Commitment Tiers
-In addition to the Pay-As-You-Go model, Log Analytics has **Commitment Tiers**, which can save you as much as 30 percent compared to the Pay-As-You-Go price. With the commitment tier pricing, you can commit to buy data ingestion starting at 100 GB/day at a lower price than Pay-As-You-Go pricing. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected.
--- During the commitment period, you can change to a higher commitment tier (which restarts the 31-day commitment period), but you can't move back to Pay-As-You-Go or to a lower commitment tier until after you finish the commitment period. -- At the end of the commitment period, the workspace retains the selected commitment tier, and the workspace can be moved to Pay-As-You-Go or to a different commitment tier at any time.
-
-Billing for the commitment tiers is done on a daily basis. [Learn more](https://azure.microsoft.com/pricing/details/monitor/) about Log Analytics Pay-As-You-Go and Commitment Tier pricing.
-
-> [!NOTE]
-> Starting June 2, 2021, **Capacity Reservations** are now called **Commitment Tiers**. Data collected above your commitment tier level (overage) is now billed at the same price-per-GB as the current commitment tier level, lowering costs compared to the old method of billing at the Pay-As-You-Go rate, and reducing the need for users with large data volumes to fine-tune their commitment level. Three new commitment tiers were also added: 1000, 2000, and 5000 GB/day.
-
-### Data size calculation
-
-<a name="data-size"></a>
-<a name="free-data-types"></a>
-In all pricing tiers, an event's data size is calculated from a string representation of the properties that are stored in Log Analytics for this event, regardless of whether the data is sent from an agent or added during the ingestion process. This includes any [custom fields](custom-fields.md) that are added as data is collected and then stored in Log Analytics. Several properties common to all data types, including some [Log Analytics Standard Properties](./log-standard-columns.md), are excluded in the calculation of the event size. This includes `_ResourceId`, `_SubscriptionId`, `_ItemId`, `_IsBillable`, `_BilledSize` and `Type`. All other properties stored in Log Analytics are included in the calculation of the event size. Some data types are free from data ingestion charges altogether, for example the [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity), [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat), [Usage](/azure/azure-monitor/reference/tables/usage) and [Operation](/azure/azure-monitor/reference/tables/operation) types. Some solutions have more solution-specific policies about free data ingestion, for instance [Azure Migrate](https://azure.microsoft.com/pricing/details/azure-migrate/) makes dependency visualization data free for the first 180-days of a Server Assessment. To determine whether an event was excluded from billing for data ingestion, you can use the [_IsBillable](log-standard-columns.md#_isbillable) property as shown [below](#data-volume-for-specific-events). Usage is reported in GB (10^9 bytes).
-
-Also, some solutions, such as [Microsoft Defender for Cloud](https://azure.microsoft.com/pricing/details/azure-defender/), [Microsoft Sentinel](https://azure.microsoft.com/pricing/details/azure-sentinel/), and [Configuration management](https://azure.microsoft.com/pricing/details/automation/) have their own pricing models.
-
-### Log Analytics Dedicated Clusters
-
-[Log Analytics Dedicated Clusters](logs-dedicated-clusters.md) are collections of workspaces in a single managed Azure Data Explorer cluster to support advanced scenarios, like [Customer-Managed Keys](customer-managed-keys.md). Log Analytics Dedicated Clusters use the same commitment tier pricing model as workspaces, except that a cluster must have a commitment level of at least 500 GB/day. There is no Pay-As-You-Go option for clusters. The cluster commitment tier has a 31-day commitment period after the commitment level is increased. During the commitment period, the commitment tier level can't be reduced, but it can be increased at any time. When workspaces are associated to a cluster, the data ingestion billing for those workspaces is done at the cluster level using the configured commitment tier level. Learn more about [creating a Log Analytics Clusters](customer-managed-keys.md#create-cluster) and [associating workspaces to it](customer-managed-keys.md#link-workspace-to-cluster). For information about commitment tier pricing, see the [Azure Monitor pricing page]( https://azure.microsoft.com/pricing/details/monitor/).
-
-The cluster commitment tier level is programmatically configured with Azure Resource Manager using the `Capacity` parameter under `Sku`. The `Capacity` is specified in units of GB and can have values of 500, 1000, 2000 or 5000 GB/day. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. For more information, see [Azure Monitor customer-managed key](customer-managed-keys.md).
-
-There are two modes of billing for usage on a cluster. These can be specified by the `billingType` parameter when [creating a cluster](logs-dedicated-clusters.md#create-a-dedicated-cluster) or set after creation. The two modes are:
--- **Cluster**: in this case (which is the default), billing for ingested data is done at the cluster level. The ingested data quantities from each workspace associated to a cluster are aggregated to calculate the daily bill for the cluster. Per-node allocations from [Microsoft Defender for Cloud](../../security-center/index.yml) are applied at the workspace level prior to this aggregation of aggregated data across all workspaces in the cluster. --- **Workspaces**: the commitment tier costs for your cluster are attributed proportionately to the workspaces in the cluster, by each workspace's data ingestion volume (after accounting for per-node allocations from [Microsoft Defender for Cloud](../../security-center/index.yml) for each workspace.) If the total data volume ingested into a cluster for a day is less than the commitment tier, each workspace is billed for its ingested data at the effective per-GB commitment tier rate by billing them a fraction of the commitment tier, and the unused part of the commitment tier is billed to the cluster resource. If the total data volume ingested into a cluster for a day is more than the commitment tier, each workspace is billed for a fraction of the commitment tier, based on its fraction of the ingested data that day and each workspace for a fraction of the ingested data above the commitment tier. If the total data volume ingested into a workspace for a day is above the commitment tier, nothing is billed to the cluster resource.-
-In cluster billing options, data retention is billed for each workspace. Cluster billing starts when the cluster is created, regardless of whether workspaces are associated with the cluster. Workspaces associated to a cluster no longer have their own pricing tier.
-
-## Estimating the costs to manage your environment
-
-If you're not yet using Azure Monitor Logs, you can use the [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to estimate the cost of using Log Analytics. In the **Search** box, enter "Azure Monitor", and then select the resulting Azure Monitor tile. Scroll down the page to **Azure Monitor**, and then expand the **Log Analytics** section. Here you can enter the GB of data that you expect to collect. If you're already evaluating Azure Monitor Logs, you can use data statistics from your own environment. See below for how to determine the [number of monitored VMs](#understanding-nodes-sending-data) and the [volume of data your workspace is ingesting](#understanding-ingested-data-volume). If you're not yet running Log Analytics, here is some guidance for estimating data volumes:
-
-1. **Monitoring VMs:** with typical monitoring enabled, 1 GB to 3 GB of data month is ingested per monitored VM.
-2. **Monitoring Azure Kubernetes Service (AKS) clusters:** details on expected data volumes for monitoring a typical AKS cluster are available [here](../containers/container-insights-cost.md#estimating-costs-to-monitor-your-aks-cluster). Follow these [best practices](../containers/container-insights-cost.md#controlling-ingestion-to-reduce-cost) to control your AKS cluster monitoring costs.
-3. **Application monitoring:** the Azure Monitor pricing calculator includes a data volume estimator using on your application's usage and based on a statistical analysis of Application Insights data volumes. In the Application Insights section of the pricing calculator, toggle the switch next to "Estimate data volume based on application activity" to use this.
-
-## Viewing Log Analytics usage on your Azure bill
-
-The easiest way to view your billed usage for a particular Log Analytics workspace is to go to the **Overview** page of the workspace and click **View Cost** in the upper right corner of the Essentials section at the top of the page. This will launch the Cost Analysis from Azure Cost Management + Billing already scoped to this workspace. You might need additional access to Cost Management data ([learn more](../../cost-management-billing/costs/assign-access-acm-data.md))
-
-Alternatively, you can start in the [Azure Cost Management + Billing](../../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=%2fazure%2fbilling%2fTOC.json) hub. here you can use the "Cost analysis" functionality to view your Azure resource expenses. To track your Log Analytics expenses, you can add a filter by "Resource type" (to microsoft.operationalinsights/workspace for Log Analytics and microsoft.operationalinsights/cluster for Log Analytics Clusters). For **Group by**, select **Meter category** or **Meter**. Other services, like Microsoft Defender for Cloud and Microsoft Sentinel, also bill their usage against Log Analytics workspace resources. To see the mapping to the service name, you can select the Table view instead of a chart.
-
-<a name="export-usage"></a>
-<a name="download-usage"></a>
-
-To gain more understanding of your usage, you can [download your usage from the Azure portal](../../cost-management-billing/understand/download-azure-daily-usage.md). For step-by-step instructions, review this [tutorial](../../cost-management-billing/costs/tutorial-export-acm-data.md).
-In the downloaded spreadsheet, you can see usage per Azure resource (for example, Log Analytics workspace) per day. In this Excel spreadsheet, usage from your Log Analytics workspaces can be found by first filtering on the "Meter Category" column to show "Log Analytics", "Insight and Analytics" (used by some of the legacy pricing tiers), and "Azure Monitor" (used by commitment tier pricing tiers), and then adding a filter on the "Instance ID" column that is "contains workspace" or "contains cluster" (the latter to include Log Analytics Cluster usage). The usage is shown in the "Consumed Quantity" column, and the unit for each entry is shown in the "Unit of Measure" column. For more information, see [Review your individual Azure subscription bill](../../cost-management-billing/understand/review-individual-bill.md).
-
-## Understand your usage and optimizing your pricing tier
-<a name="understand-your-usage-and-estimate-costs"></a>
-
-To learn about your usage trends and choose the most cost-effective log Analytics pricing tier, use **Log Analytics Usage and Estimated Costs**. This shows how much data is collected by each solution, how much data is being retained, and an estimate of your costs for each pricing tier based on recent data ingestion patterns.
--
-To explore your data in more detail, select on the icon in the upper-right corner of either chart on the **Usage and Estimated Costs** page. Now you can work with this query to explore more details of your usage.
--
-From the **Usage and Estimated Costs** page, you can review your data volume for the month. This includes all the billable data received and retained in your Log Analytics workspace.
-
-Log Analytics charges are added to your Azure bill. You can see details of your Azure bill under the **Billing** section of the Azure portal or in the [Azure Billing Portal](https://account.windowsazure.com/Subscriptions).
-
-## Changing pricing tier
-
-To change the Log Analytics pricing tier of your workspace:
-
-1. In the Azure portal, open **Usage and estimated costs** from your workspace; you'll see a list of each of the pricing tiers available to this workspace.
-
-2. Review the estimated costs for each pricing tier. This estimate is based on the last 31 days of usage, so this cost estimate relies on the last 31 days being representative of your typical usage. In the example below, you can see how, based on the data patterns from the last 31 days, this workspace would cost less in the Pay-As-You-Go tier (#1) compared to the 100 GB/day commitment tier (#2).
-
-
-3. After reviewing the estimated costs based on the last 31 days of usage, if you decide to change the pricing tier, select **Select**.
-
-### Changing pricing tier via ARM
-
-You can also [set the pricing tier via Azure Resource Manager](./resource-manager-workspace.md) using the `sku` object to set the pricing tier, and the `capacityReservationLevel` parameter if the pricing tier is `capacityresrvation`. (Learn more about [setting workspace properties via ARM](/azure/templates/microsoft.operationalinsights/2020-08-01/workspaces?tabs=json#workspacesku-object).) Here is a sample Azure Resource Manager template to set your workspace to a 300 GB/day commitment tier (in Resource Manager, it's called `capacityreservation`).
-
-```
-{
- "$schema": https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#,
- "contentVersion": "1.0.0.0",
- "resources": [
- {
- "name": "YourWorkspaceName",
- "type": "Microsoft.OperationalInsights/workspaces",
- "apiVersion": "2020-08-01",
- "location": "yourWorkspaceRegion",
- "properties": {
- "sku": {
- "name": "capacityreservation",
- "capacityReservationLevel": 300
- }
- }
- }
- ]
-}
-```
-
-To use this template in PowerShell, after [installing the Azure Az PowerShell module](/powershell/azure/install-az-ps), sign in to Azure using `Connect-AzAccount`, select the subscription containing your workspace using `Select-AzSubscription -SubscriptionId YourSubscriptionId`, and apply the template (saved in a file named template.json):
-
-```
-New-AzResourceGroupDeployment -ResourceGroupName "YourResourceGroupName" -TemplateFile "template.json"
-```
-
-To set the pricing tier to other values such as Pay-As-You-Go (called `pergb2018` for the SKU), omit the `capacityReservationLevel` property. Learn more about [creating ARM templates](../../azure-resource-manager/templates/template-tutorial-create-first-template.md), [adding a resource to your template](../../azure-resource-manager/templates/template-tutorial-add-resource.md), and [applying templates](../resource-manager-samples.md).
-
-### Tracking pricing tier changes
-
-Changes to a workspace's pricing pier are recorded in the [Activity Log](../essentials/activity-log.md) with an event with the Operation named "Create Workspace". The event's **Change history** tab will show the old and new pricing tiers in the `properties.sku.name` row. Click the "Activity Log" option from your workspace to see events scoped to a particular workspace. To monitor changes the pricing tier, you can create an alert for the "Create Workspace" operation.
-
-## Legacy pricing tiers
-
-Subscriptions that contained a Log Analytics workspace or Application Insights resource on April 2, 2018, or are linked to an Enterprise Agreement that started before February 1, 2019 and is still active, will continue to have access to use the legacy pricing tiers: **Free Trial**, **Standalone (Per GB)**, and **Per Node (OMS)**. Workspaces in the Free Trial pricing tier will have daily data ingestion limited to 500 MB (except for security data types collected by [Microsoft Defender for Cloud](../../security-center/index.yml)) and the data retention is limited to seven days. The Free Trial pricing tier is intended only for evaluation purposes. No SLA is provided for the Free tier. Workspaces in the Standalone or Per Node pricing tiers have user-configurable retention from 30 to 730 days. Creating new workspaces in (or moving existing workspaces into) the Free Trial pricing tier is possible until July 1, 2022.
-
-Usage on the Standalone pricing tier is billed by the ingested data volume. It is reported in the **Log Analytics** service and the meter is named "Data Analyzed".
-
-The Per Node pricing tier charges per monitored VM (node) on an hour granularity. For each monitored node, the workspace is allocated 500 MB of data per day that's not billed. This allocation is calculated with hourly granularity and is aggregated at the workspace level each day. Data ingested above the aggregate daily data allocation is billed per GB as data overage. On your bill, the service will be **Insight and Analytics** for Log Analytics usage if the workspace is in the Per Node pricing tier. Usage is reported on three meters:
-
-- **Node**: this is usage for the number of monitored nodes (VMs) in units of node months.
-- **Data Overage per Node**: this is the number of GB of data ingested in excess of the aggregated data allocation.
-- **Data Included per Node**: this is the amount of ingested data that was covered by the aggregated data allocation. This meter is also used in all pricing tiers to show the amount of data covered by the Microsoft Defender for Cloud allocation.
-
-> [!TIP]
-> If your workspace has access to the **Per Node** pricing tier but you're wondering whether it would cost less in a Pay-As-You-Go tier, you can [use the query below](#evaluating-the-legacy-per-node-pricing-tier) to easily get a recommendation.
-
-Workspaces created before April 2016 can continue to use the **Standard** and **Premium** pricing tiers that have fixed data retention of 30 days and 365 days, respectively. New workspaces can't be created in the **Standard** or **Premium** pricing tiers, and if a workspace is moved out of these tiers, it can't be moved back. Data ingestion meters on your Azure bill for these legacy tiers are called "Data analyzed."
-
-### Legacy pricing tiers and Microsoft Defender for Cloud
-
-There are also some interactions between the use of legacy Log Analytics tiers and how usage is billed for [Microsoft Defender for Cloud](../../security-center/index.yml):
-
-- If the workspace is in the legacy Standard or Premium tier, Microsoft Defender for Cloud is billed only for Log Analytics data ingestion, not per node.
-- If the workspace is in the legacy Per Node tier, Microsoft Defender for Cloud is billed using the current [Microsoft Defender for Cloud node-based pricing model](https://azure.microsoft.com/pricing/details/security-center/).
-- In other pricing tiers (including commitment tiers), if Microsoft Defender for Cloud was enabled before June 19, 2017, Microsoft Defender for Cloud is billed only for Log Analytics data ingestion. Otherwise, Microsoft Defender for Cloud is billed using the current Microsoft Defender for Cloud node-based pricing model.
-
-More details of pricing tier limitations are available at [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md#log-analytics-workspaces).
-
-None of the legacy pricing tiers have regional-based pricing.
-
-> [!NOTE]
-> To use the entitlements that come from purchasing OMS E1 Suite, OMS E2 Suite, or OMS Add-On for System Center, choose the Log Analytics *Per Node* pricing tier.
-
-## Log Analytics and Microsoft Defender for Cloud
-<a name="ASC"></a>
-
-[Microsoft Defender for Servers (part of Defender for Cloud)](../../security-center/index.yml) billing is closely tied to Log Analytics billing. Microsoft Defender for Servers [bills by the number of monitored servers](https://azure.microsoft.com/pricing/details/azure-defender/) and provides a 500 MB/server/day data allocation that is applied to the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security) (WindowsEvent, SecurityAlert, SecurityBaseline, SecurityBaselineSummary, SecurityDetection, SecurityEvent, WindowsFirewall, MaliciousIPCommunication, LinuxAuditLog, SysmonEvent, ProtectionStatus) and the Update and UpdateSummary data types when the Update Management solution isn't running on the workspace or solution targeting is enabled ([learn more](../../security-center/security-center-pricing.md#what-data-types-are-included-in-the-500-mb-data-daily-allowance)). The count of monitored servers is calculated on an hourly granularity. The daily data allocation contributions from each monitored server are aggregated at the workspace level. If the workspace is in the legacy Per Node pricing tier, the Microsoft Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data.
-
-To view the daily Defender for Servers data allocations for a workspace, you need to [export your usage details](#viewing-log-analytics-usage-on-your-azure-bill), open the usage spreadsheet and filter the meter category to "Insight and Analytics". You'll then see usage with the meter name "Data Included per Node" which has a zero price per GB. The consumed quantity column will show the number of GBs of Defender for Cloud data allocation for the day. (If the workspace is in the legacy Per Node Log Analytics pricing tier, this meter will also include the data allocations from this Log Analytics pricing tier.)
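To get a rough sense of how much of your daily ingestion falls into the security data types listed above (the types eligible for the Defender for Servers allocation), you can run a query like the following sketch against the **Usage** data type. It shows raw ingested volume for those types, not the portion actually covered by the allocation.

```kusto
// Sketch: approximate daily ingested volume of the security data types eligible for the Defender for Servers allocation
let SecurityDataTypes = dynamic(["WindowsEvent", "SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus"]);
Usage
| where TimeGenerated > ago(32d)
| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
| where IsBillable == true
| where DataType in (SecurityDataTypes)
| summarize SecurityDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d)
| render columnchart
```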
-
-## Change the data retention period
-
-The following steps describe how to configure how long log data is kept in your workspace. Data retention at the workspace level can be configured from 30 to 730 days (2 years) for all workspaces unless they're using the legacy Free Trial pricing tier. Retention for individual data types can be set as low as 4 days. [Learn more](https://azure.microsoft.com/pricing/details/monitor/) about pricing for longer data retention. To retain data longer than 730 days, consider using [Log Analytics workspace data export](logs-data-export.md).
-
-### Workspace level default retention
-
-To set the default retention for your workspace:
-
-1. In the Azure portal, from your workspace, select **Usage and estimated costs** in the left pane.
-2. On the **Usage and estimated costs** page, select **Data Retention** at the top of the page.
-3. On the pane, move the slider to increase or decrease the number of days, and then select **OK**. If you're on the *free* tier, you can't modify the data retention period; you need to upgrade to the paid tier to control this setting.
--
-When the retention is lowered, there's a grace period of several days before the data older than the new retention setting is removed.
-
-The **Data Retention** page allows retention settings of 30, 31, 60, 90, 120, 180, 270, 365, 550, and 730 days. If another setting is required, it can be configured via [Azure Resource Manager](./resource-manager-workspace.md) using the `retentionInDays` parameter. When you set the data retention to 30 days, you can trigger an immediate purge of older data using the `immediatePurgeDataOn30Days` parameter (eliminating the grace period). This might be useful for compliance-related scenarios where immediate data removal is imperative. This immediate purge functionality is only exposed via Azure Resource Manager.
-
-Workspaces with 30 days retention might actually retain data for 31 days. If it's imperative that data be kept for only 30 days, use Azure Resource Manager to set the retention to 30 days and also set the `immediatePurgeDataOn30Days` parameter.
-
-By default, two data types - `Usage` and `AzureActivity` - are retained for a minimum of 90 days at no charge. When you increase the workspace retention to more than 90 days, you also increase the retention of these data types, and you'll be charged for retaining this data beyond the 90-day period. These data types are also free from data ingestion charges.
-
-Data types from workspace-based Application Insights resources (`AppAvailabilityResults`, `AppBrowserTimings`, `AppDependencies`, `AppExceptions`, `AppEvents`, `AppMetrics`, `AppPageViews`, `AppPerformanceCounters`, `AppRequests`, `AppSystemEvents`, and `AppTraces`) are also retained for 90 days at no charge by default. Their retention can be adjusted using the retention by data type functionality.
-
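To see how much billable volume these workspace-based Application Insights tables contribute, you can adapt the per-type queries later in this article. Here is a minimal sketch; the explicit table list simply mirrors the data types named above.

```kusto
// Sketch: billable volume contributed by workspace-based Application Insights tables over the last month
Usage
| where TimeGenerated > ago(32d)
| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
| where IsBillable == true
| where DataType in ("AppAvailabilityResults", "AppBrowserTimings", "AppDependencies", "AppExceptions", "AppEvents", "AppMetrics", "AppPageViews", "AppPerformanceCounters", "AppRequests", "AppSystemEvents", "AppTraces")
| summarize BillableDataGB = sum(Quantity) / 1000. by DataType
| sort by BillableDataGB desc
```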
-> [!TIP]
-> The Log Analytics [purge API](/rest/api/loganalytics/workspacepurge/purge) doesn't affect retention billing and is intended to be used for very limited cases. **To reduce your retention bill, the retention period must be reduced either for the workspace or for specific data types.** Learn more about managing [personal data stored in Log Analytics and Application Insights](./personal-data-mgmt.md).
-
-### Retention by data type
-
-It's also possible to specify different retention settings for individual data types from 4 to 730 days (except for workspaces in the legacy Free Trial pricing tier) that override the workspace-level default retention. Each data type is a sub-resource of the workspace. For example, the SecurityEvent table can be addressed in [Azure Resource Manager](../../azure-resource-manager/management/overview.md) as:
-
-```
-/subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent
-```
-
-Note that the data type (table) is case-sensitive. To get the current per-data-type retention settings of a particular data type (in this example SecurityEvent), use:
-
-```http
- GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2017-04-26-preview
-```
-
-> [!NOTE]
-> Retention is only returned for a data type if the retention is explicitly set for it. Data types that don't have retention explicitly set (and thus inherit the workspace retention) don't return anything from this call.
-
-To get the current per-data-type retention settings for all data types in your workspace that have had their per-data-type retention set, just omit the specific data type, for example:
-
-```http
- GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables?api-version=2017-04-26-preview
-```
-
-To set the retention of a particular data type (in this example SecurityEvent) to 730 days, use:
-
-```http
- PUT /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2017-04-26-preview
- {
- "properties":
- {
- "retentionInDays": 730
- }
- }
-```
-
-Valid values for `retentionInDays` are from 4 through 730.
-
-A great tool to connect directly to Azure Resource Manager to set retention by data type is the OSS tool [ARMclient](https://github.com/projectkudu/ARMClient). Learn more about ARMclient from articles by [David Ebbo](http://blog.davidebbo.com/2015/01/azure-resource-manager-client.html) and Daniel Bowbyes. Here's an example using ARMClient, setting SecurityEvent data to a 730-day retention:
-
-```
-armclient PUT /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2017-04-26-preview "{properties: {retentionInDays: 730}}"
-```
-
-> [!TIP]
-> Setting retention on individual data types can be used to reduce your costs for data retention. For data collected starting in October 2019 (when this feature was released), reducing the retention for some data types can reduce your retention cost over time. For data collected earlier, setting a lower retention for an individual type won't affect your retention costs.
-
-## Manage your maximum daily data volume
-
-You can configure a daily cap to limit the daily ingestion for your workspace, but use care because your goal shouldn't be to hit the daily limit. Otherwise, you lose data for the remainder of the day, which can impact other Azure services and solutions whose functionality may depend on up-to-date data being available in the workspace. As a result, you may lose the ability to observe and receive alerts when the health conditions of resources supporting IT services are impacted. The daily cap is intended as a way to manage an **unexpected increase** in data volume from your managed resources and stay within your limit, or to limit unplanned charges for your workspace. It isn't appropriate to set a daily cap so that it's met each day on a workspace.
-
-Each workspace has its daily cap applied on a different hour of the day. The reset hour is shown in the **Daily Cap** page (see below). This reset hour can't be configured.
-
-Soon after the daily limit is reached, the collection of billable data types stops for the rest of the day. Latency inherent in applying the daily cap means that the cap isn't applied at precisely the specified daily cap level. A warning banner appears across the top of the page for the selected Log Analytics workspace, and an operation event is sent to the *Operation* table under the **LogManagement** category. Data collection resumes after the reset time defined under *Daily limit will be set at*. We recommend defining an alert rule that's based on this operation event, configured to notify when the daily data limit is reached. For more information, see [Alert when daily cap is reached](#alert-when-daily-cap-is-reached) section.
-
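To check whether the cap has already been hit recently, a quick look at the `_LogOperation` function (described later in this article) lists collection stopped and resumed events. This is a minimal sketch:

```kusto
// Sketch: recent daily-cap collection stopped/resumed events from the _LogOperation function
_LogOperation
| where TimeGenerated > ago(7d)
| where Category == "Ingestion"
| where Operation has "Data collection"
| project TimeGenerated, Operation, Detail
| sort by TimeGenerated desc
```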
-> [!NOTE]
-> The daily cap can't stop data collection at precisely the specified cap level and some excess data is expected, particularly if the workspace is receiving high volumes of data. If data is collected above the cap, it's still billed. For a query that is helpful in studying the daily cap behavior, see the [View the effect of the Daily Cap](#view-the-effect-of-the-daily-cap) section in this article.
-
-> [!WARNING]
-> For workspaces with Microsoft Defender for Cloud, the daily cap doesn't stop the collection of data types **WindowsEvent**, **SecurityAlert**, **SecurityBaseline**, **SecurityBaselineSummary**, **SecurityDetection**, **SecurityEvent**, **WindowsFirewall**, **MaliciousIPCommunication**, **LinuxAuditLog**, **SysmonEvent**, **ProtectionStatus**, **Update**, and **UpdateSummary**, except for workspaces in which Microsoft Defender for Cloud was installed before June 19, 2017.
-
-### Identify what daily data limit to define
-
-To understand the data ingestion trend and the daily volume cap to define, review [Log Analytics Usage and estimated costs](../usage-estimated-costs.md). Consider it with care, because you can't monitor your resources after the limit is reached.
-
-### Set the daily cap
-
-The following steps describe how to configure a limit to manage the volume of data that your Log Analytics workspace will ingest per day.
-
-1. From your workspace, select **Usage and estimated costs** in the left pane.
-2. On the **Usage and estimated costs** page for the selected workspace, select **Data Cap** at the top of the page.
-3. By default, **Daily cap** is set to **OFF**. To enable it, select **ON**, and then set the data volume limit in GB/day.
--
-You can use Azure Resource Manager to configure the daily cap. To configure it, set the `dailyQuotaGb` parameter under `WorkspaceCapping` as described at [Workspaces - Create Or Update](/rest/api/loganalytics/workspaces/createorupdate#workspacecapping).
-
-You can track changes made to the daily cap using this query:
-
-```kusto
-_LogOperation | where Operation == "Workspace Configuration" | where Detail contains "Daily quota"
-```
-
-Learn more about the [_LogOperation](./monitor-workspace.md) function.
-
-### View the effect of the daily cap
-
-To view the effect of the daily cap, it's important to account for the security data types that aren't included in the daily cap, and the reset hour for your workspace. The daily cap reset hour is visible on the **Daily Cap** page. The following query can be used to track the data volumes that are subject to the daily cap between daily cap resets. In this example, the workspace's reset hour is 14:00. You'll need to update this for your workspace.
-
-```kusto
-let DailyCapResetHour=14;
-Usage
-| where DataType !in ("SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent")
-| where TimeGenerated > ago(32d)
-| extend StartTime=datetime_add("hour",-1*DailyCapResetHour,StartTime)
-| where StartTime > startofday(ago(31d))
-| where IsBillable
-| summarize IngestedGbBetweenDailyCapResets=sum(Quantity)/1000. by day=bin(StartTime , 1d) // Quantity in units of MB
-| render areachart
-```
-Add the `Update` and `UpdateSummary` data types to the `where DataType` line when the Update Management solution is not running on the workspace or solution targeting is enabled ([learn more](../../security-center/security-center-pricing.md#what-data-types-are-included-in-the-500-mb-data-daily-allowance)).
-
-### Alert when daily cap is reached
-
-The Azure portal shows a visual cue when your data limit threshold is met, but this behavior doesn't necessarily align with how you manage operational issues that require immediate attention. To receive an alert notification, you can create a new alert rule in Azure Monitor. To learn more, see [how to create, view, and manage alerts](../alerts/alerts-metric.md).
-
-To get you started, here are the recommended settings for the alert querying the `Operation` table using the `_LogOperation` function ([learn more](./monitor-workspace.md)).
-
-- Target: Select your Log Analytics resource
-- Criteria:
- - Signal name: Custom log search
- - Search query: `_LogOperation | where Operation =~ "Data collection stopped" | where Detail contains "OverQuota"`
- - Based on: Number of results
- - Condition: Greater than
- - Threshold: 0
- - Period: 5 (minutes)
- - Frequency: 5 (minutes)
-- Alert rule name: Daily data limit reached
-- Severity: Warning (Sev 1)
-
-After an alert is defined and the limit is reached, an alert is triggered and performs the response defined in the Action Group. It can notify your team in the following ways:
-
-- Email and text messages
-- Automated actions using webhooks
-- Azure Automation runbooks
-- [Integration with an external ITSM solution](../alerts/itsmc-definition.md#create-itsm-work-items-from-azure-alerts)
-
-## Investigate your Log Analytics usage
-<a name="troubleshooting-why-usage-is-higher-than-expected"></a>
-
-Higher usage is caused by one, or both, of the following:
-- More nodes than expected sending data to the Log Analytics workspace. For information, see the [Understanding nodes sending data](#understanding-nodes-sending-data) section of this article.
-- More data than expected being sent to the Log Analytics workspace (perhaps due to starting to use a new solution or a configuration change to an existing solution). For information, see the [Understanding ingested data volume](#understanding-ingested-data-volume) section of this article.
-
-If you observe high data ingestion reported using the `Usage` records (see the [Data volume by solution](#data-volume-by-solution) section), but you don't observe the same results summing `_BilledSize` directly on the [data type](#data-volume-for-specific-events), it's possible that you have significant late-arriving data. For information about how to diagnose this, see the [Late arriving data](#late-arriving-data) section of this article.
-
-### Log Analytics Workspace Insights
-
-Start understanding your data volumes in the **Usage** tab of the [Log Analytics Workspace Insights workbook](log-analytics-workspace-insights-overview.md). On the **Usage Dashboard**, you can easily see:
-- Which data tables are ingesting the most data volume in the main table,
-- What are the top resources contributing data, and
-- What is the trend of data ingestion.
-
-You can pivot to **Additional Queries** to easily run more queries that are useful for understanding your data patterns.
-
-Learn more about the [capabilities of the Usage tab](log-analytics-workspace-insights-overview.md#usage-tab).
-
-While this workbook can answer many of the questions without even needing to run a query, to answer more specific questions or do deeper analyses, the queries in the next two sections will help to get you started.
-
-## Understanding ingested data volume
-
-On the **Usage and Estimated Costs** page, the *Data ingestion per solution* chart shows the total volume of data sent and how much is being sent by each solution. You can determine trends like whether the overall data usage (or usage by a particular solution) is growing, remaining steady, or decreasing.
-
-### Data volume for specific events
-
-To look at the size of ingested data for a particular set of events, you can query the specific table (in this example `Event`) and then restrict the query to the events of interest (in this example event ID 5145 or 5156):
-
-```kusto
-Event
-| where TimeGenerated > startofday(ago(31d)) and TimeGenerated < startofday(now())
-| where EventID == 5145 or EventID == 5156
-| where _IsBillable == true
-| summarize count(), Bytes=sum(_BilledSize) by EventID, bin(TimeGenerated, 1d)
-```
-
-Note that the clause `where _IsBillable == true` filters out data types from certain solutions for which there is no ingestion charge. [Learn more](./log-standard-columns.md#_isbillable) about `_IsBillable`.
-
-### Data volume by solution
-
-The query used to view the billable data volume by solution over the last month (excluding the last partial day) can be built using the [Usage](/azure/azure-monitor/reference/tables/usage) data type as:
-
-```kusto
-Usage
-| where TimeGenerated > ago(32d)
-| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
-| where IsBillable == true
-| summarize BillableDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d), Solution
-| render columnchart
-```
-
-The clause with `TimeGenerated` is only to ensure that the query experience in the Azure portal looks back beyond the default 24 hours. When using the **Usage** data type, `StartTime` and `EndTime` represent the time buckets for which results are presented.
-
-### Data volume by type
-
-You can drill in further to see data trends by data type:
-
-```kusto
-Usage
-| where TimeGenerated > ago(32d)
-| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
-| where IsBillable == true
-| summarize BillableDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d), DataType
-| render columnchart
-```
-
-Or to see a table by solution and type for the last month,
-
-```kusto
-Usage
-| where TimeGenerated > ago(32d)
-| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
-| where IsBillable == true
-| summarize BillableDataGB = sum(Quantity) / 1000 by Solution, DataType
-| sort by Solution asc, DataType asc
-```
-
-### Data volume by computer
-
-The **Usage** data type doesn't include information at the computer level. To see the **size** of ingested billable data per computer, use the **_BilledSize** [property](./log-standard-columns.md#_billedsize), which provides the size in bytes:
-
-```kusto
-find where TimeGenerated > ago(24h) project _BilledSize, _IsBillable, Computer, Type
-| where _IsBillable == true and Type != "Usage"
-| extend computerName = tolower(tostring(split(Computer, '.')[0]))
-| summarize BillableDataBytes = sum(_BilledSize) by computerName
-| sort by BillableDataBytes desc nulls last
-```
-
-The **_IsBillable** [property](./log-standard-columns.md#_isbillable) specifies whether the ingested data will incur charges. The **Usage** type is omitted because this is only for analytics of data trends.
-
-To see the **count** of billable events ingested per computer, use
-
-```kusto
-find where TimeGenerated > ago(24h) project _IsBillable, Computer
-| where _IsBillable == true and Type != "Usage"
-| extend computerName = tolower(tostring(split(Computer, '.')[0]))
-| summarize eventCount = count() by computerName
-| sort by eventCount desc nulls last
-```
-
-> [!TIP]
-> Use these `find` queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you don't need results **per computer**, query on the **Usage** data type.
-
-### Data volume by Azure resource, resource group, or subscription
-
-For data from nodes hosted in Azure, to get the **size** of ingested data __per Azure resource__, use the [_ResourceId property](./log-standard-columns.md#_resourceid), which provides the full path to the resource:
-
-```kusto
-find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillable
-| where _IsBillable == true
-| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId | sort by BillableDataBytes nulls last
-```
-
-For data from nodes hosted in Azure, you can get the **size** of ingested data __per Azure subscription__ by using the **_SubscriptionId** property as:
-
-```kusto
-find where TimeGenerated > ago(24h) project _BilledSize, _IsBillable, _SubscriptionId
-| where _IsBillable == true
-| summarize BillableDataBytes = sum(_BilledSize) by _SubscriptionId | sort by BillableDataBytes nulls last
-```
-
-To get data volume by resource group, you can parse **_ResourceId**:
-
-```kusto
-find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillable
-| where _IsBillable == true
-| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId
-| extend resourceGroup = tostring(split(_ResourceId, "/")[4] )
-| summarize BillableDataBytes = sum(BillableDataBytes) by resourceGroup | sort by BillableDataBytes nulls last
-```
-
-If needed, you can also parse the **_ResourceId** more fully:
-
-```Kusto
-| parse tolower(_ResourceId) with "/subscriptions/" subscriptionId "/resourcegroups/"
- resourceGroup "/providers/" provider "/" resourceType "/" resourceName
-```
-
-> [!TIP]
-> Use these `find` queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you don't need results per subscription, resource group, or resource name, query on the **Usage** data type.
-
-> [!WARNING]
-> Some of the fields of the **Usage** data type, while still in the schema, have been deprecated and their values are no longer populated.
-> These are **Computer**, as well as fields related to ingestion (**TotalBatches**, **BatchesWithinSla**, **BatchesOutsideSla**, **BatchesCapped** and **AverageProcessingTimeMs**).
-
-## Tips for reducing data volume
-
-This table lists some suggestions for reducing the volume of logs collected.
-
-| Source of high data volume | How to reduce data volume |
-| -- | - |
-| Data Collection Rules | The [Azure Monitor Agent](../agents/azure-monitor-agent-overview.md) uses Data Collection Rules to manage the collection of data. You can [limit the collection of data](../agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries) using custom XPath queries. |
-| Container Insights | [Configure Container Insights](../containers/container-insights-cost.md#controlling-ingestion-to-reduce-cost) to collect only the data you require. |
-| Microsoft Sentinel | Review any [Sentinel data sources](../../sentinel/connect-data-sources.md) that you recently enabled as sources of additional data volume. [Learn more](../../sentinel/azure-sentinel-billing.md) about Sentinel costs and billing. |
-| Security events | Select [common or minimal security events](../../security-center/security-center-enable-data-collection.md#data-collection-tier). <br> Change the security audit policy to collect only needed events. In particular, review the need to collect events for: <br> - [audit filtering platform](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772749(v=ws.10)). <br> - [audit registry](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd941614(v%3dws.10)). <br> - [audit file system](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772661(v%3dws.10)). <br> - [audit kernel object](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd941615(v%3dws.10)). <br> - [audit handle manipulation](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772626(v%3dws.10)). <br> - audit removable storage. |
-| Performance counters | Change the [performance counter configuration](../agents/data-sources-performance-counters.md) to: <br> - Reduce the frequency of collection. <br> - Reduce the number of performance counters. |
-| Event logs | Change the [event log configuration](../agents/data-sources-windows-events.md) to: <br> - Reduce the number of event logs collected. <br> - Collect only required event levels. For example, do not collect *Information* level events. |
-| Syslog | Change the [syslog configuration](../agents/data-sources-syslog.md) to: <br> - Reduce the number of facilities collected. <br> - Collect only required event levels. For example, do not collect *Info* and *Debug* level events. |
-| AzureDiagnostics | Change the [resource log collection](../essentials/diagnostic-settings.md#create-in-azure-portal) to: <br> - Reduce the number of resources that send logs to Log Analytics. <br> - Collect only required logs. |
-| Solution data from computers that don't need the solution | Use [solution targeting](../insights/solution-targeting.md) to collect data from only required groups of computers. |
-| Application Insights | Review options for [managing Application Insights data volume](../app/pricing.md#managing-your-data-volume). |
-| [SQL Analytics](../insights/azure-sql.md) | Use [Set-AzSqlServerAudit](/powershell/module/az.sql/set-azsqlserveraudit) to tune the auditing settings. |
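For several of the sources in this table (security events, Windows event logs, syslog), a per-type volume query helps identify which event IDs or facilities dominate before you tighten collection. Here is a minimal sketch for **SecurityEvent**; swap in `Event` or `Syslog` and the appropriate grouping column as needed.

```kusto
// Sketch: identify which Windows security event IDs drive the most billable volume
SecurityEvent
| where TimeGenerated > ago(7d)
| where _IsBillable == true
| summarize Events = count(), BillableDataGB = sum(_BilledSize) / 1.E9 by EventID
| sort by BillableDataGB desc
```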
-## Create an alert when data collection is high
-
-This section describes how to create an alert when the data volume in the last 24 hours exceeded a specified amount, using Azure Monitor [Log Alerts](../alerts/alerts-unified-log.md).
-
-To alert if the billable data volume ingested in the last 24 hours was greater than 50 GB:
-
-- **Define alert condition** specify your Log Analytics workspace as the resource target.
-- **Alert criteria** specify the following:
- - **Signal Name** select **Custom log search**
- - **Search query** to `Usage | where IsBillable | summarize DataGB = sum(Quantity / 1000.) | where DataGB > 50`.
- - **Alert logic** is **Based on** *number of results* and **Condition** is *Greater than* a **Threshold** of *0*
- - **Time period** of *1440* minutes and **Alert frequency** to every *1440* minutes to run once a day.
-- **Define alert details** specify the following:
- - **Name** to *Billable data volume greater than 50 GB in 24 hours*
- - **Severity** to *Warning*
-
-To be notified when the log alert matches its criteria, specify an existing [action group](../alerts/action-groups.md) or create a new one.
-
-When you receive an alert, use the steps in the above sections about how to troubleshoot why usage is higher than expected.
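Before you create the rule, you can check what the alert query would return for the current day by running a variant with an explicit 24-hour window. This is a sketch; adjust the 50 GB threshold to your own limit.

```kusto
// Sketch: check the current 24-hour billable volume against a 50 GB threshold before creating the alert rule
Usage
| where TimeGenerated > ago(24h)
| where IsBillable
| summarize DataGB = sum(Quantity) / 1000.
| where DataGB > 50
```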
-
-## Querying for common data types
-
-To dig deeper into the source of data for a particular data type, here are some useful example queries:
-
-+ **Workspace-based Application Insights** resources
- - Learn more at [Manage usage and costs for Application Insights](../app/pricing.md#data-volume-for-workspace-based-application-insights-resources)
-+ **Security** solution
- - `SecurityEvent | summarize AggregatedValue = count() by EventID`
-+ **Log Management** solution
- - `Usage | where Solution == "LogManagement" and iff(isnotnull(toint(IsBillable)), IsBillable == true, IsBillable == "true") == true | summarize AggregatedValue = count() by DataType`
-+ **Perf** data type
- - `Perf | summarize AggregatedValue = count() by CounterPath`
- - `Perf | summarize AggregatedValue = count() by CounterName`
-+ **Event** data type
- - `Event | summarize AggregatedValue = count() by EventID`
- - `Event | summarize AggregatedValue = count() by EventLog, EventLevelName`
-+ **Syslog** data type
- - `Syslog | summarize AggregatedValue = count() by Facility, SeverityLevel`
- - `Syslog | summarize AggregatedValue = count() by ProcessName`
-+ **AzureDiagnostics** data type
- - `AzureDiagnostics | summarize AggregatedValue = count() by ResourceProvider, ResourceId`
-
-
-## Understanding nodes sending data
-
-To understand the number of nodes that are reporting heartbeats from the agent each day in the last month, use this query:
-
-```kusto
-Heartbeat
-| where TimeGenerated > startofday(ago(31d))
-| summarize nodes = dcount(Computer) by bin(TimeGenerated, 1d)
-| render timechart
-```
-To get a count of nodes sending data in the last 24 hours, use this query:
-
-```kusto
-find where TimeGenerated > ago(24h) project Computer
-| extend computerName = tolower(tostring(split(Computer, '.')[0]))
-| where computerName != ""
-| summarize nodes = dcount(computerName)
-```
-
-To get a list of nodes sending any data (and the amount of data sent by each), use this query:
-
-```kusto
-find where TimeGenerated > ago(24h) project _BilledSize, Computer
-| extend computerName = tolower(tostring(split(Computer, '.')[0]))
-| where computerName != ""
-| summarize TotalVolumeBytes=sum(_BilledSize) by computerName
-```
-
-> [!TIP]
-> Use these `find` queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you don't need results **per computer**, then query on the **Usage** data type.
-
-### Nodes billed by the legacy Per Node pricing tier
-
-The [legacy Per Node pricing tier](#legacy-pricing-tiers) bills for nodes with hourly granularity and also doesn't count nodes that are only sending a set of security data types. To get a list of computers that will be billed as nodes if the workspace is in the legacy Per Node pricing tier, look for nodes that are sending **billed data types** (some data types are free). To do this, use the [_IsBillable property](./log-standard-columns.md#_isbillable) and use the leftmost field of the fully qualified domain name. This returns the count of computers with billed data per hour:
-
-```kusto
-find where TimeGenerated >= startofday(ago(7d)) and TimeGenerated < startofday(now()) project Computer, _IsBillable, Type, TimeGenerated
-| where Type !in ("SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent")
-| extend computerName = tolower(tostring(split(Computer, '.')[0]))
-| where computerName != ""
-| where _IsBillable == true
-| summarize billableNodesPerHour=dcount(computerName) by bin(TimeGenerated, 1h)
-| summarize billableNodesPerDay = sum(billableNodesPerHour)/24., billableNodeMonthsPerDay = sum(billableNodesPerHour)/24./31. by day=bin(TimeGenerated, 1d)
-| sort by day asc
-```
-
-The number of units on your bill is in units of node months, which is represented by `billableNodeMonthsPerDay` in the query.
-If the workspace has the Update Management solution installed, add the **Update** and **UpdateSummary** data types to the list in the where clause in the above query. Finally, there's some additional complexity in the actual billing algorithm when solution targeting is used that's not represented in the above query.
-
-> [!TIP]
-> Use these `find` queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you don't need results **per computer**, then query on the **Usage** data type.
-
-### Getting Security and Automation node counts
-
-To see the number of distinct Security nodes, you can use the query:
-
-```kusto
-union
-(
- Heartbeat
- | where (Solutions has 'security' or Solutions has 'antimalware' or Solutions has 'securitycenter')
- | project Computer
-),
-(
- ProtectionStatus
- | where Computer !in~
- (
- (
- Heartbeat
- | project Computer
- )
- )
- | project Computer
-)
-| distinct Computer
-| project lowComputer = tolower(Computer)
-| distinct lowComputer
-| count
-```
-
-To see the number of distinct Automation nodes, use the query:
-
-```kusto
- ConfigurationData
- | where (ConfigDataType == "WindowsServices" or ConfigDataType == "Software" or ConfigDataType =="Daemons")
- | extend lowComputer = tolower(Computer) | summarize by lowComputer
- | join (
- Heartbeat
- | where SCAgentChannel == "Direct"
- | extend lowComputer = tolower(Computer) | summarize by lowComputer, ComputerEnvironment
- ) on lowComputer
- | summarize count() by ComputerEnvironment | sort by ComputerEnvironment asc
-```
-
-## Evaluating the legacy Per Node pricing tier
-
-The decision of whether workspaces with access to the legacy **Per Node** pricing tier are better off in that tier or in a current **Pay-As-You-Go** or **Commitment Tier** is often difficult for customers to assess. This involves understanding the trade-off between the fixed cost per monitored node in the Per Node pricing tier and its included data allocation of 500 MB/node/day and the cost of just paying for ingested data in the Pay-As-You-Go (Per GB) tier.
-
-To facilitate this assessment, the following query can be used to make a recommendation for the optimal pricing tier based on a workspace's usage patterns. This query looks at the monitored nodes and data ingested into a workspace in the last seven days, and for each day, it evaluates which pricing tier would have been optimal. To use the query, you need to specify:
-
-- Whether the workspace is using Microsoft Defender for Cloud by setting **workspaceHasSecurityCenter** to **true** or **false**.
-- Update the prices if you have specific discounts.
-- Specify the number of days to look back and analyze by setting **daysToEvaluate**. This is useful if the query is taking too long trying to look at seven days of data.
-
-Here is the pricing tier recommendation query:
-
-```kusto
-// Set these parameters before running query.
-// For Pay-As-You-Go (per-GB) and commitment tier pricing details, see https://azure.microsoft.com/pricing/details/monitor/.
-// You can see your per-node costs in your Azure usage and charge data. For more information, see https://docs.microsoft.com/en-us/azure/cost-management-billing/understand/download-azure-daily-usage.
-let daysToEvaluate = 7; // Enter number of previous days to analyze (reduce if the query is taking too long)
-let workspaceHasSecurityCenter = false; // Specify if the workspace has Defender for Cloud (formerly known as Azure Security Center)
-let PerNodePrice = 15.; // Monthly price per monitored node
-let PerNodeOveragePrice = 2.30; // Price per GB for data overage in the Per Node pricing tier
-let PerGBPrice = 2.30; // Enter the Pay-as-you-go price for your workspace's region (from https://azure.microsoft.com/pricing/details/monitor/)
-let CommitmentTier100Price = 196.; // Enter your price for the 100 GB/day commitment tier
-let CommitmentTier200Price = 368.; // Enter your price for the 200 GB/day commitment tier
-let CommitmentTier300Price = 540.; // Enter your price for the 300 GB/day commitment tier
-let CommitmentTier400Price = 704.; // Enter your price for the 400 GB/day commitment tier
-let CommitmentTier500Price = 865.; // Enter your price for the 500 GB/day commitment tier
-let CommitmentTier1000Price = 1700.; // Enter your price for the 1000 GB/day commitment tier
-let CommitmentTier2000Price = 3320.; // Enter your price for the 2000 GB/day commitment tier
-let CommitmentTier5000Price = 8050.; // Enter your price for the 5000 GB/day commitment tier
-//
-let SecurityDataTypes=dynamic(["SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent", "Update", "UpdateSummary"]);
-let StartDate = startofday(datetime_add("Day",-1*daysToEvaluate,now()));
-let EndDate = startofday(now());
-union *
-| where TimeGenerated >= StartDate and TimeGenerated < EndDate
-| extend computerName = tolower(tostring(split(Computer, '.')[0]))
-| where computerName != ""
-| summarize nodesPerHour = dcount(computerName) by bin(TimeGenerated, 1h)
-| summarize nodesPerDay = sum(nodesPerHour)/24. by day=bin(TimeGenerated, 1d)
-| join kind=leftouter (
- Heartbeat
- | where TimeGenerated >= StartDate and TimeGenerated < EndDate
- | where Computer != ""
- | summarize ASCnodesPerHour = dcount(Computer) by bin(TimeGenerated, 1h)
- | extend ASCnodesPerHour = iff(workspaceHasSecurityCenter, ASCnodesPerHour, 0)
- | summarize ASCnodesPerDay = sum(ASCnodesPerHour)/24. by day=bin(TimeGenerated, 1d)
-) on day
-| join (
- Usage
- | where TimeGenerated >= StartDate and TimeGenerated < EndDate
- | where IsBillable == true
- | extend NonSecurityData = iff(DataType !in (SecurityDataTypes), Quantity, 0.)
- | extend SecurityData = iff(DataType in (SecurityDataTypes), Quantity, 0.)
- | summarize DataGB=sum(Quantity)/1000., NonSecurityDataGB=sum(NonSecurityData)/1000., SecurityDataGB=sum(SecurityData)/1000. by day=bin(StartTime, 1d)
-) on day
-| extend AvgGbPerNode = NonSecurityDataGB / nodesPerDay
-| extend OverageGB = iff(workspaceHasSecurityCenter,
- max_of(DataGB - 0.5*nodesPerDay - 0.5*ASCnodesPerDay, 0.),
- max_of(DataGB - 0.5*nodesPerDay, 0.))
-| extend PerNodeDailyCost = nodesPerDay * PerNodePrice / 31. + OverageGB * PerNodeOveragePrice
-| extend billableGB = iff(workspaceHasSecurityCenter,
- (NonSecurityDataGB + max_of(SecurityDataGB - 0.5*ASCnodesPerDay, 0.)), DataGB )
-| extend PerGBDailyCost = billableGB * PerGBPrice
-| extend CommitmentTier100DailyCost = CommitmentTier100Price + max_of(billableGB - 100, 0.)* CommitmentTier100Price/100.
-| extend CommitmentTier200DailyCost = CommitmentTier200Price + max_of(billableGB - 200, 0.)* CommitmentTier200Price/200.
-| extend CommitmentTier300DailyCost = CommitmentTier300Price + max_of(billableGB - 300, 0.)* CommitmentTier300Price/300.
-| extend CommitmentTier400DailyCost = CommitmentTier400Price + max_of(billableGB - 400, 0.)* CommitmentTier400Price/400.
-| extend CommitmentTier500DailyCost = CommitmentTier500Price + max_of(billableGB - 500, 0.)* CommitmentTier500Price/500.
-| extend CommitmentTier1000DailyCost = CommitmentTier1000Price + max_of(billableGB - 1000, 0.)* CommitmentTier1000Price/1000.
-| extend CommitmentTier2000DailyCost = CommitmentTier2000Price + max_of(billableGB - 2000, 0.)* CommitmentTier2000Price/2000.
-| extend CommitmentTier5000DailyCost = CommitmentTier5000Price + max_of(billableGB - 5000, 0.)* CommitmentTier5000Price/5000.
-| extend MinCost = min_of(
- PerNodeDailyCost,PerGBDailyCost,CommitmentTier100DailyCost,CommitmentTier200DailyCost,
- CommitmentTier300DailyCost, CommitmentTier400DailyCost, CommitmentTier500DailyCost, CommitmentTier1000DailyCost, CommitmentTier2000DailyCost, CommitmentTier5000DailyCost)
-| extend Recommendation = case(
- MinCost == PerNodeDailyCost, "Per node tier",
- MinCost == PerGBDailyCost, "Pay-as-you-go tier",
- MinCost == CommitmentTier100DailyCost, "Commitment tier (100 GB/day)",
- MinCost == CommitmentTier200DailyCost, "Commitment tier (200 GB/day)",
- MinCost == CommitmentTier300DailyCost, "Commitment tier (300 GB/day)",
- MinCost == CommitmentTier400DailyCost, "Commitment tier (400 GB/day)",
- MinCost == CommitmentTier500DailyCost, "Commitment tier (500 GB/day)",
- MinCost == CommitmentTier1000DailyCost, "Commitment tier (1000 GB/day)",
- MinCost == CommitmentTier2000DailyCost, "Commitment tier (2000 GB/day)",
- MinCost == CommitmentTier5000DailyCost, "Commitment tier (5000 GB/day)",
- "Error"
-)
-| project day, nodesPerDay, ASCnodesPerDay, NonSecurityDataGB, SecurityDataGB, OverageGB, AvgGbPerNode, PerGBDailyCost, PerNodeDailyCost,
- CommitmentTier100DailyCost, CommitmentTier200DailyCost, CommitmentTier300DailyCost, CommitmentTier400DailyCost, CommitmentTier500DailyCost, CommitmentTier1000DailyCost, CommitmentTier2000DailyCost, CommitmentTier5000DailyCost, Recommendation
-| sort by day asc
-//| project day, Recommendation // Comment this line to see details
-| sort by day asc
-```
-
-This query isn't an exact replication of how usage is calculated, but it provides pricing tier recommendations in most cases.
-
-> [!NOTE]
-> To use the entitlements that come from purchasing OMS E1 Suite, OMS E2 Suite, or OMS Add-On for System Center, choose the Log Analytics *Per Node* pricing tier.
-
-<a name="allocations"></a>
-
-## Viewing data allocation benefits
-
-To view data allocation benefits from sources such as [Microsoft Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/), [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5 and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/), or the [Sentinel Free Trial](https://azure.microsoft.com/pricing/details/microsoft-sentinel/), you need to [export your usage details](#viewing-log-analytics-usage-on-your-azure-bill). Open the exported usage spreadsheet and filter the "Instance ID" column to your workspace. (To select all of your workspaces in the spreadsheet, filter the Instance ID column to "contains /workspaces/".) Next, filter the ResourceRate column to show only rows where this is equal to zero. Now you will see the data allocations from these various sources.
-
-> [!NOTE]
-> Data allocations from the Defender for Servers 500 MB/server/day benefit will appear in rows with the meter name "Data Included per Node" and the meter category "Insight and Analytics" (the name of a legacy offer still used with this meter). If the workspace is in the legacy Per Node Log Analytics pricing tier, this meter will also include the data allocations from this Log Analytics pricing tier.
-
-## Late-arriving data
-
-Situations can arise where data is ingested with old timestamps, for example, when an agent can't communicate with Log Analytics because of a connectivity issue, or when a host has an incorrect date/time. This can manifest itself as an apparent discrepancy between the ingested data reported by the **Usage** data type and a query summing **_BilledSize** over the raw data for a particular day specified by **TimeGenerated**, the timestamp when the event was generated.
-
-To diagnose late-arriving data issues, use the **_TimeReceived** column ([learn more](./log-standard-columns.md#_timereceived)) in addition to the **TimeGenerated** column. **_TimeReceived** is the time when the record was received by the Azure Monitor ingestion point in the Azure cloud. For example, if, using the **Usage** records, you observed high ingested data volumes of **W3CIISLog** data on May 2, 2021, here is a query that identifies the timestamps on this ingested data:
-
-```Kusto
-W3CIISLog
-| where TimeGenerated > datetime(1970-01-01)
-| where _TimeReceived >= datetime(2021-05-02) and _TimeReceived < datetime(2021-05-03)
-| where _IsBillable == true
-| summarize BillableDataMB = sum(_BilledSize)/1.E6 by bin(TimeGenerated, 1d)
-| sort by TimeGenerated asc
-```
-
-The `where TimeGenerated > datetime(1970-01-01)` statement is present only to provide a hint to the Log Analytics user interface to look over all data.
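To see how far behind the late-arriving records were, you can also bucket the same data by the difference between **_TimeReceived** and **TimeGenerated**. This is a sketch built on the query above; the one-hour buckets are arbitrary.

```kusto
// Sketch: distribution of ingestion lag (in hours) for W3CIISLog records received on 2021-05-02
W3CIISLog
| where TimeGenerated > datetime(1970-01-01)
| where _TimeReceived >= datetime(2021-05-02) and _TimeReceived < datetime(2021-05-03)
| where _IsBillable == true
| extend IngestionLagHours = datetime_diff("hour", _TimeReceived, TimeGenerated)
| summarize BillableDataMB = sum(_BilledSize) / 1.E6 by IngestionLagHours
| sort by IngestionLagHours asc
```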
-
-## Data transfer charges using Log Analytics
-
-Sending data to Log Analytics might incur data bandwidth charges. However, that's limited to Virtual Machines where a Log Analytics agent is installed and doesn't apply when using Diagnostics settings or with other connectors that are built in to Microsoft Sentinel. As described in the [Azure Bandwidth pricing page](https://azure.microsoft.com/pricing/details/bandwidth/), data transfer between Azure services located in two regions is charged as outbound data transfer at the normal rate. Inbound data transfer is free. However, this charge is very small compared to the costs for Log Analytics data ingestion. So, controlling costs for Log Analytics needs to focus on your [ingested data volume](#understanding-ingested-data-volume).
-
-## Troubleshooting why Log Analytics is no longer collecting data
-
-If you're on the legacy Free pricing tier and have sent more than 500 MB of data in a day, data collection stops for the rest of the day. Reaching the daily limit is a common reason that Log Analytics stops collecting data, or data appears to be missing. Log Analytics creates an **Operation** type event when data collection starts and stops. Run the following query in search to check whether you're reaching the daily limit and missing data:
-
-```kusto
-Operation | where OperationCategory == 'Data Collection Status'
-```
-
-When data collection stops, the **OperationStatus** is **Warning**. When data collection starts, the **OperationStatus** is **Succeeded**. The following table lists reasons that data collection stops and a suggested action to resume data collection.
-
-|Reason collection stops| Solution|
-|--||
-|Daily cap of your workspace was reached|Wait for collection to automatically restart, or increase the daily data volume limit described in [Manage your maximum daily data volume](#manage-your-maximum-daily-data-volume). The daily cap reset time is shown on the **Daily Cap** page. |
-| Your workspace has hit the [Data Ingestion Volume Rate](../service-limits.md#log-analytics-workspaces) | The default ingestion volume rate limit for data sent from Azure resources using diagnostic settings is approximately 6 GB/min per workspace. This is an approximate value because the actual size can vary between data types, depending on the log length and its compression ratio. This limit doesn't apply to data that's sent from agents or the Data Collector API. If you send data at a higher rate to a single workspace, some data is dropped, and an event is sent to the Operation table in your workspace every 6 hours while the threshold continues to be exceeded. If your ingestion volume continues to exceed the rate limit or you are expecting to reach it sometime soon, you can request an increase to your workspace by sending an email to LAIngestionRate@microsoft.com or by opening a support request. The event to look for that indicates a data ingestion rate limit can be found by the query `Operation | where OperationCategory == "Ingestion" | where Detail startswith "The rate of data crossed the threshold"`. |
-|Daily limit of legacy Free pricing tier reached |Wait until the following day for collection to automatically restart, or change to a paid pricing tier.|
-|Azure subscription is in a suspended state due to:<br> Free trial ended<br> Azure pass expired<br> Monthly spending limit reached (such as on an MSDN or Visual Studio subscription)|Convert to a paid subscription<br> Remove limit, or wait until limit resets|
-
-To be notified when data collection stops, use the steps described in the [Alert when daily cap is reached](#alert-when-daily-cap-is-reached) section. To configure an e-mail, webhook, or runbook action for the alert rule, use the steps described in [create an action group](../alerts/action-groups.md).
-
-## Limits summary
-
-There are additional Log Analytics limits, some of which depend on the Log Analytics pricing tier. These are documented at [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md#log-analytics-workspaces).
--
-## Next steps
-- See [Log searches in Azure Monitor Logs](../logs/log-query-overview.md) to learn how to use the search language. You can use search queries to perform additional analysis on the usage data.
-- Use the steps described in [create a new log alert](../alerts/alerts-metric.md) to be notified when a search criteria is met.
-- Use [solution targeting](../insights/solution-targeting.md) to collect data from only required groups of computers.
-- To configure an effective event collection policy, review [Microsoft Defender for Cloud filtering policy](../../security-center/security-center-enable-data-collection.md).
-- Change [performance counter configuration](../agents/data-sources-performance-counters.md).
-- To modify your event collection settings, review [event log configuration](../agents/data-sources-windows-events.md).
-- To modify your syslog collection settings, review [syslog configuration](../agents/data-sources-syslog.md).
azure-monitor Monitor Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/monitor-workspace.md
Note, after reaching the set limit, your data collection will automatically stop
Recommended Actions: * Check _LogOperation table for collection stopped and collection resumed events.</br> `_LogOperation | where TimeGenerated >= ago(7d) | where Category == "Ingestion" | where Operation has "Data collection"`
-* [Create an alert](./manage-cost-storage.md#alert-when-daily-cap-is-reached) on "Data collection stopped" Operation event, this alert will allow you to get notified when the collection limit was reached.
+* [Create an alert](daily-cap.md#alert-when-daily-cap-is-reached) on "Data collection stopped" Operation event, this alert will allow you to get notified when the collection limit was reached.
* Data collected after the daily collection limit is reached will be lost; use the 'workspace insights' blade to review usage rates from each source.
-Or, you can decide to ([Manage your maximum daily data volume](./manage-cost-storage.md#manage-your-maximum-daily-data-volume) \ [change the pricing tier](./manage-cost-storage.md#changing-pricing-tier) to one that will suite your collection rates pattern).
-* Data collection rate is calculated per day, and will reset at the start of the next day, you can also monitor collection resume event by [Create an alert](./manage-cost-storage.md#alert-when-daily-cap-is-reached) on "Data collection resumed" Operation event.
+Or, you can decide to ([Manage your maximum daily data volume](daily-cap.md) \ [change the pricing tier](cost-logs.md#commitment-tiers) to one that will suit your collection rate pattern).
+* Data collection rate is calculated per day, and will reset at the start of the next day, you can also monitor collection resume event by [Create an alert](./daily-cap.md#alert-when-daily-cap-is-reached) on "Data collection resumed" Operation event.
#### Operation: Ingestion rate "The data ingestion volume rate crossed the threshold in your workspace: {0:0.00} MB per one minute and data has been dropped."
Recommended Actions:
* Check _LogOperation table for ingestion rate event `_LogOperation | where TimeGenerated >= ago(7d) | where Category == "Ingestion" | where Operation has "Ingestion rate"` Note: an event is written to the Operation table in the workspace every 6 hours while the threshold continues to be exceeded.
-* [Create an alert](./manage-cost-storage.md#alert-when-daily-cap-is-reached) on "Data collection stopped" Operation event, this alert will allow you to get notified when the limit is reached.
+* [Create an alert](daily-cap.md#alert-when-daily-cap-is-reached) on "Data collection stopped" Operation event, this alert will allow you to get notified when the limit is reached.
* Data collected while the ingestion rate reached 100% will be dropped and lost. Use the 'workspace insights' blade to review your usage patterns and try to reduce them.</br> For further information: </br> [Azure Monitor service limits](../service-limits.md#data-ingestion-volume-rate) </br>
-[Manage usage and costs for Azure Monitor Logs](./manage-cost-storage.md#alert-when-daily-cap-is-reached)
+[Analyze usage in Log Analytics workspace](analyze-usage.md)
#### Operation: Maximum table column count
azure-monitor Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/service-limits.md
This article lists limits in different areas of Azure Monitor.
- [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/)
- [Monitoring usage and estimated costs in Azure Monitor](./usage-estimated-costs.md)
-- [Manage usage and costs for Application Insights](app/pricing.md)
azure-monitor Usage Estimated Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/usage-estimated-costs.md
Title: Monitor usage and estimated costs in Azure Monitor
-description: Get an overview of the process of using the page for Azure Monitor usage and estimated costs.
-
+ Title: Azure Monitor cost and usage
+description: Overview of how Azure Monitor is billed and how to estimate and analyze billable usage.
- Previously updated : 10/28/2019- Last updated : 03/28/2022
-# Monitor usage and estimated costs in Azure Monitor
+# Azure Monitor cost and usage
+This article describes the different ways that Azure Monitor charges for usage, how to evaluate charges on your Azure bill, and how to estimate charges to monitor your entire environment.
-This article describes how to view usage and estimated costs across multiple Azure monitoring features.
+## Pricing model
+Azure Monitor uses a consumption-based pricing (pay-as-you-go) billing model where you only pay for what you use.
+Features of Azure Monitor that are enabled by default do not incur any charge. This includes collection and alerting on the [Activity log](essentials/activity-log.md) and collection and analysis of [platform metrics](essentials/metrics-supported.md).
-## Azure Monitor pricing model
+Several other features don't have a direct cost, but you instead pay for the ingestion and retention of data that they collect. The following table describes the different types of usage that are charged in Azure Monitor. Detailed pricing for each is provided in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
-The basic Azure Monitor billing model is a cloud-friendly, consumption-based pricing (pay-as-you-go). You pay for only what you use. Pricing details are available for [alerting, metrics, and notifications](https://azure.microsoft.com/pricing/details/monitor/); [Log Analytics](https://azure.microsoft.com/pricing/details/log-analytics/); and [Application Insights](https://azure.microsoft.com/pricing/details/application-insights/).
-In addition to the pay-as-you-go model for log data, Azure Monitor Log Analytics has Commitment Tiers. They enable you to save as much as 30 percent compared to the pay-as-you-go pricing. Commitment Tiers start at 100 gigabytes (GB) a day. Any usage above the Commitment Tier will be billed at the same price per gigabyte as the Commitment Tier. [Learn more about Commitment Tier pricing](https://azure.microsoft.com/pricing/details/monitor/).
+| Type | Description |
+|:|:|
+| Logs | Ingestion, retention, and export of data in Log Analytics workspaces and legacy Application Insights resources. This will typically be the bulk of Azure Monitor charges for most customers. There is no charge for querying this data except in the case of [Basic Logs](logs/basic-logs-configure.md) or [Archived Logs](logs/data-retention-archive.md).<br><br>Charges for Logs can vary significantly depending on the configuration that you choose. See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges for Logs data are calculated and the different pricing tiers available. |
+| Resource Logs | [Diagnostic and auditing information](essentials/resource-logs.md) is charged for [certain services](essentials/resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there is a charge for the workspace data ingestion and collection. |
+| Custom metrics | There is no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There is a cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). |
+| Alerts | Charged based on the type and number of [signals](alerts/alerts-overview.md#what-you-can-alert-on) used by the alert rule, its frequency, and the type of notification used in response. |
+| Multi-step web tests | There is a cost for [multi-step web tests](app/availability-multistep.md) in Application Insights, but this feature has been deprecated.
-Some customers have access to [legacy Log Analytics pricing tiers](logs/manage-cost-storage.md#legacy-pricing-tiers) and the [legacy Enterprise Application Insights pricing tier](app/pricing.md#legacy-enterprise-per-node-pricing-tier).
+## Data transfer charges
+Sending data to Azure Monitor can incur data bandwidth charges. As described in the [Azure Bandwidth pricing page](https://azure.microsoft.com/pricing/details/bandwidth/), data transfer between Azure services located in two regions is charged as outbound data transfer at the normal rate. Inbound data transfer is free. However, this charge is typically very small compared to the costs for data ingestion and retention. Controlling costs for Log Analytics should therefore focus on your ingested data volume.
-## Azure Monitor costs
+## Estimate Azure Monitor usage and costs
+If you're new to Azure Monitor, you can use the [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to estimate your costs. In the **Search** box, enter *Azure Monitor*, and then select the **Azure Monitor** tile. The pricing calculator will help you estimate your likely costs based on your expected utilization.
-There are two phases for understanding costs: estimating costs when you're considering Azure Monitor as your monitoring solution, and then tracking actual costs after deployment.
+The bulk of your costs will typically be from data ingestion and retention for your Log Analytics workspaces and Application Insights resources. It's difficult to give accurate estimates for data volumes that you can expect since they'll vary significantly based on your configuration. A common strategy is to enable monitoring for a small group of resources and use the observed data volumes with the calculator to determine your costs for a full environment. See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for queries and other methods to measure the billable data in your Log Analytics workspace.
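As one hedged example of the kind of measurement that article describes, the following sketch uses the `Usage` table (assuming its standard schema: `Quantity` in MB, `IsBillable`, and `Solution`) to chart daily billable ingestion for the past 31 days:

```kusto
// Daily billable ingestion (GB) by solution over the last 31 days.
// Quantity is reported in MB, so divide by 1,000 to get GB.
Usage
| where TimeGenerated > startofday(ago(31d))
| where IsBillable == true
| summarize BillableDataGB = sum(Quantity) / 1000.0 by bin(TimeGenerated, 1d), Solution
| render columnchart
```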
-### Estimate the costs to manage your environment
+Following is basic guidance that you can use for common resources:
-If you're not yet using Azure Monitor Logs, you can use the [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to estimate the cost of using Azure Monitor. Start by entering **Azure Monitor** in the **Search** box, and then selecting the **Azure Monitor** tile. Scroll down the page to **Azure Monitor**, and select one of the options from the **Type** dropdown list:
+- **Virtual machines.** With typical monitoring enabled, a virtual machine will generate between 1 GB and 3 GB of data per month. This is highly dependent on the configuration of your agents. (A sketch for measuring per-machine volume in an existing workspace follows this list.)
+- **Application Insights.** See the following section for different methods to estimate data from your applications.
+- **Container insights.** See [Estimating costs to monitor your AKS cluster](containers/container-insights-cost.md#estimating-costs-to-monitor-your-aks-cluster) for guidance on estimating data for your AKS cluster.
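If a few machines are already reporting to a workspace, the sketch below approximates billable volume per computer so you can extrapolate to a full environment; it assumes the hidden `_BilledSize` and `_IsBillable` columns, which are available in most workspaces:

```kusto
// Approximate billable data volume per computer over the last 7 days.
// _BilledSize is in bytes; divide by 1e9 for (decimal) GB, then extrapolate to a month.
find where TimeGenerated > ago(7d) project _BilledSize, _IsBillable, Computer
| where _IsBillable == true
| summarize BillableDataGB = sum(_BilledSize) / 1e9 by Computer
| sort by BillableDataGB desc
```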
-- **Metrics queries and Alerts** -- **Log Analytics**-- **Application Insights**
+## Estimate application usage
+There are two methods that you can use to estimate the amount of data from an application monitored with Application Insights.
+
+### Learn from what similar applications collect
+In the Azure Monitor pricing calculator for Application Insights, enable **Estimate data volume based on application activity**, which allows you to provide inputs about your application. The calculator will then tell you the median and 90th percentile amount of data collected by similar applications. These applications span the range of Application Insights configurations, so you can still use options such as [sampling](app/sampling.md) to reduce the volume of data you ingest for your application below the median level.
-In each of these types, the pricing calculator will help you estimate your likely costs based on your expected utilization.
+### Data collection when using sampling
+With the ASP.NET SDK's [adaptive sampling](app/sampling.md#adaptive-sampling), the data volume is adjusted automatically to keep within a specified maximum rate of traffic for default Application Insights monitoring. If the application produces a low amount of telemetry, such as when debugging or due to low usage, items won't be dropped by the sampling processor as long as the volume is below the configured events-per-second level. For a high-volume application, with the default threshold of five events per second, adaptive sampling will limit the number of daily events to 432,000. Considering a typical average event size of 1 KB, this corresponds to 13.4 GB of telemetry per 31-day month per node hosting your application, since the sampling is done locally on each node.
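A quick back-of-the-envelope check of those figures, under the stated assumptions (five events per second, 1 KB average event size, decimal units, 31-day month), can be run directly in Log Analytics:

```kusto
// 5 events/sec * 86,400 sec/day = 432,000 events/day.
// 432,000 events/day * 1 KB * 31 days ≈ 13.4 GB per node per month.
print eventsPerDay = 5 * 86400,
      monthlyGB = (5.0 * 86400 * 31) / 1000000
```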
-For example, with Log Analytics, you can enter the number of virtual machines (VMs) and the gigabytes of data that you expect to collect from each VM. Typically, 1 GB to 3 GB of data per month is ingested from an Azure VM. If you're already evaluating Azure Monitor Logs, you can use your data statistics from your own environment. You can determine the [number of monitored VMs](logs/manage-cost-storage.md#understanding-nodes-sending-data) and the [volume of data that your workspace is ingesting](logs/manage-cost-storage.md#understanding-ingested-data-volume).
+For SDKs that don't support adaptive sampling, you can employ [ingestion sampling](app/sampling.md#ingestion-sampling), which samples when the data is received by Application Insights based on a percentage of data to retain, or [fixed-rate sampling for ASP.NET, ASP.NET Core, and Java websites](app/sampling.md#fixed-rate-sampling) to reduce the traffic sent from your web server and web browsers.
-For Application Insights, if you enable the **Estimate data volume based on application activity** functionality, you can provide inputs about your application (requests per month and page views per month, if you'll collect client-side telemetry). Then the calculator will tell you the median and 90th percentile amount of data that similar applications collect.
-These applications span the range of Application Insights configurations. For example, some have default sampling, some have no sampling, and some have custom sampling. So you still have the control to reduce the volume of data that you ingest to far below the median level by using sampling. But this is a starting point to understand what similar customers are seeing. [Learn more about estimating costs for Application Insights](app/pricing.md#estimating-the-costs-to-manage-your-application).
+## Viewing Azure Monitor usage and charges
+There are two primary tools to view and analyze your Azure Monitor billing and estimated charges.
-### Track usage and costs
+- [Azure Cost Management + Billing](#azure-cost-management--billing) is the primary tool that you'll use to analyze your usage and costs. It gives you multiple options to analyze your monthly charges for different Azure Monitor features and their projected cost over time.
+- [Usage and Estimated Costs](#usage-and-estimated-costs) provides a listing of monthly charges for different Azure Monitor features. This is particularly useful for Log Analytics workspaces where it helps you to select your pricing tier by showing how your cost would be different at different tiers.
-It's important to understand and track your usage after you start using Azure Monitor. A rich set of tools can help facilitate this tracking.
-#### Azure Cost Management + Billing
+## Azure Cost Management + Billing
+Azure Cost Management + Billing includes several built-in dashboards for deep cost analysis like cost by resource and invoice details. To get started analyzing your Azure Monitor charges, open [Cost Management + Billing](../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=/azure/billing/TOC.json) in the Azure portal. Select **Cost Management** and then **Cost analysis**. Select your subscription or another [scope](../cost-management-billing/costs/understand-work-scopes.md).
-Azure provides useful functionality in the [Azure Cost Management + Billing](../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=/azure/billing/TOC.json) hub. After you open the hub, select **Cost Management** and select the [scope](../cost-management-billing/costs/understand-work-scopes.md) (the set of resources to investigate). You might need additional access to Cost Management data ([learn more](../cost-management-billing/costs/assign-access-acm-data.md)).
+>[!NOTE]
+>You might need additional access to Cost Management data. See [Assign access to Cost Management data](../cost-management-billing/costs/assign-access-acm-data.md).
-To see the Azure Monitor costs for the last 30 days, select the **Daily Costs** tile, select **Last 30 days** under **Relative dates**, and add a filter that selects the service names:
+To limit the view to Azure Monitor charges, [create a filter](../cost-management-billing/costs/group-filter.md) for the following **Service names**:
- **Azure Monitor**
- **Application Insights**
- **Log Analytics**
- **Insight and Analytics**
-The result is a view like the following example:
+Other services such as Microsoft Defender for Cloud and Microsoft Sentinel also bill their usage against Log Analytics workspace resources, so you may want to add them to your filter. See [Common cost analysis uses](../cost-management-billing/costs/cost-analysis-common-uses.md) for details on using this view.
![Screenshot that shows Azure Cost Management with cost information.](./media/usage-estimated-costs/010.png)
-You can drill in from this accumulated cost summary to get the finer details in the **Cost by resource** view. In the current pricing tiers, Azure log data is charged on the same set of meters whether it originates from Log Analytics or Application Insights.
+>[!NOTE]
+>Alternatively, you can go to the **Overview** page of a Log Analytics workspace or Application Insights resource and click **View Cost** in the upper right corner of the **Essentials** section. This will launch the **Cost Analysis** from Azure Cost Management + Billing already scoped to the workspace or application.
+> :::image type="content" source="logs/media/view-bill/view-cost-option.png" lightbox="logs/media/view-bill/view-cost-option.png" alt-text="Screenshot of option to view cost for Log Analytics workspace.":::
+
+### Download usage
+To gain more understanding of your usage, you can download your usage from the Azure portal and see usage per Azure resource in the downloaded spreadsheet. See [Tutorial: Create and manage exported data](../cost-management-billing/costs/tutorial-export-acm-data.md) for a tutorial, including how to automatically create a daily report that you can use for regular analysis.
+
+Usage from your Log Analytics workspaces can be found by first filtering on the **Meter Category** column to show *Log Analytics*, *Insight and Analytics* (used by some of the legacy pricing tiers), and *Azure Monitor* (used by commitment tier pricing tiers). Add a filter on the *Instance ID* column for *contains workspace* or *contains cluster*. The usage is shown in the **Consumed Quantity** column, and the unit for each entry is shown in the **Unit of Measure** column.
+
+### Application Insights meters
+Most Application Insights usage for both classic and workspace-based resources is reported on meters with **Log Analytics** for **Meter Category**, because there's a single log back end for all Azure Monitor components. Only Application Insights resources on legacy pricing tiers and multiple-step web tests are reported with **Application Insights** for **Meter Category**. The usage is shown in the **Consumed Quantity** column, and the unit for each entry is shown in the **Unit of Measure** column. See [understand your Microsoft Azure bill](../cost-management-billing/understand/review-individual-bill.md) for more details.
+
+To separate costs from your Log Analytics or Application Insights usage, [create a filter](../cost-management-billing/costs/group-filter.md) on **Resource type**. To see all Application Insights costs, filter **Resource type** to **microsoft.insights/components**. For Log Analytics costs, filter **Resource type** to **microsoft.operationalinsights/workspaces**.
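If you prefer to enumerate those resources before filtering costs, a sketch of an Azure Resource Graph query (run in Resource Graph Explorer) using the same resource types mentioned above might look like this:

```kusto
// List Log Analytics workspaces and Application Insights resources in scope,
// i.e. the resource types the Cost Management filters above refer to.
resources
| where type in~ ("microsoft.operationalinsights/workspaces", "microsoft.insights/components")
| project name, type, resourceGroup, subscriptionId, location
| order by type asc, name asc
```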
-To separate costs from your Log Analytics or Application Insights usage, you can add a filter on **Resource type**. To see all Application Insights costs, filter **Resource type** to **microsoft.insights/components**. For Log Analytics costs, filter **Resource type** to **microsoft.operationalinsights/workspaces**.
+## Usage and estimated costs
+You can get additional usage details about Log Analytics workspaces and Application Insights resources from the **Usage and Estimated Costs** option for each.
+### Log Analytics workspace
+To learn about your usage trends and choose the most cost-effective [commitment tier](logs/cost-logs.md#commitment-tiers) for your Log Analytics workspace, select **Usage and Estimated Costs** from the **Log Analytics workspace** menu in the Azure portal.
-More details about your usage are available if you [download your usage from the Azure portal](../cost-management-billing/understand/download-azure-daily-usage.md). In the downloaded Excel spreadsheet, you can see usage per Azure resource per day. You can find usage from your Application Insights resources by filtering on the **Meter Category** column to show **Application Insights** and **Log Analytics**. Then add a **contains microsoft.insights/components** filter on the **Instance ID** column.
-Most Application Insights usage is reported on meters with **Log Analytics** for **Meter Category**, because there's a single log back end for all Azure Monitor components. Only Application Insights resources on legacy pricing tiers and multiple-step web tests are reported with **Application Insights** for **Meter Category**. The usage is shown in the **Consumed Quantity** column, and the unit for each entry is shown in the **Unit of Measure** column. More details are available to help you [understand your Microsoft Azure bill](../cost-management-billing/understand/review-individual-bill.md).
+This view includes the following:
-#### Usage and estimated costs
+A. Estimated monthly charges based on usage from the past 31 days using the current pricing tier.<br>
+B. Estimated monthly charges using different commitment tiers.<br>
+C. Billable data ingestion by solution from the past 31 days.
-Another option for viewing your Azure Monitor usage is the **Usage and estimated costs** page in the Monitor hub. This page shows the usage of core monitoring features such as [alerting, metrics, and notifications](https://azure.microsoft.com/pricing/details/monitor/); [Azure Log Analytics](https://azure.microsoft.com/pricing/details/log-analytics/); and [Azure Application Insights](https://azure.microsoft.com/pricing/details/application-insights/). For customers on the pricing plans available before April 2018, this page also includes Log Analytics usage purchased through the Insights and Analytics offer.
+To explore the data in more detail, click on the icon in the upper-right corner of either chart to work with the query in Log Analytics.
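As a complement to the by-solution chart in that view, the following hedged sketch (again assuming the standard `Usage` table schema) breaks billable ingestion down by table instead:

```kusto
// Billable ingestion (GB) by table (data type) over the last 31 days.
Usage
| where TimeGenerated > startofday(ago(31d))
| where IsBillable == true
| summarize BillableDataGB = sum(Quantity) / 1000.0 by DataType
| sort by BillableDataGB desc
```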
-On this page, users can view their resource usage for the past 31 days, aggregated per subscription. Drill-ins show usage trends over the 31-day period. A lot of data needs to come together for this estimate, so please be patient as the page loads.
-This example shows monitoring usage and an estimate of the resulting costs:
+### Application Insights
+To learn about your usage trends for your classic Application Insights resource, select **Usage and Estimated Costs** from the **Applications** menu in the Azure portal.
-![Screenshot of the Azure portal that shows usage and estimated costs.](./media/usage-estimated-costs/001.png)
-Select the link in the **MONTHLY USAGE** column to open a chart that shows usage trends over the last 31-day period:
+This view includes the following:
-![Screenshot that shows a bar chart for included data volume per node.](./media/usage-estimated-costs/002.png)
+A. Estimated monthly charges based on usage from the past month.<br>
+B. Billable data ingestion by table from the past month.
+
+To investigate your Application Insights usage more deeply, open the **Metrics** page, add the metric named *Data point volume*, and then select the *Apply splitting* option to split the data by "Telemetry item type".
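If your Application Insights resource is workspace-based, a rough equivalent from the Log Analytics side is sketched below; the table names and the hidden `_BilledSize` column are assumptions based on the standard workspace-based schema:

```kusto
// Approximate billable Application Insights volume per table (GB, last 30 days).
union withsource = SourceTable
    AppRequests, AppDependencies, AppExceptions, AppTraces, AppPageViews, AppPerformanceCounters
| where TimeGenerated > ago(30d)
| summarize BillableDataGB = sum(_BilledSize) / 1e9 by SourceTable
| sort by BillableDataGB desc
```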
++
+## Viewing data allocation benefits
+
+To view data allocation benefits from sources such as [Microsoft Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/), [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5 and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/), or the [Sentinel Free Trial](https://azure.microsoft.com/pricing/details/microsoft-sentinel/), you need to export your usage details. Open the exported usage spreadsheet and filter the "Instance ID" column to your workspace. (To select all of your workspaces in the spreadsheet, filter the Instance ID column to "contains /workspaces/".) Next, filter the ResourceRate column to show only rows where this is equal to zero. Now you will see the data allocations from these various sources.
> [!NOTE]
-> Using **Cost Management** in the **Azure Cost Management + Billing** hub is the preferred approach to broadly understanding monitoring costs. The **Usage and estimated costs** experiences for [Log Analytics](logs/manage-cost-storage.md#understand-your-usage-and-estimate-costs) and [Application Insights](app/pricing.md#understand-your-usage-and-estimate-costs) provide deeper insights for each of those parts of Azure Monitor.
+> Data allocations from the Defender for Servers 500 MB/server/day benefit will appear in rows with the meter name "Data Included per Node" and the meter category set to "Insight and Analytics" (the name of a legacy offer still used with this meter). If the workspace is in the legacy Per Node Log Analytics pricing tier, this meter will also include the data allocations from this Log Analytics pricing tier.
+ ## Operations Management Suite subscription entitlements
-Customers who purchased Microsoft Operations Management Suite E1 and E2 are eligible for per-node data ingestion entitlements for [Log Analytics](https://www.microsoft.com/cloud-platform/operations-management-suite) and [Application Insights](app/pricing.md). For customers to receive these entitlements for Log Analytics workspaces or Application Insights resources in a subscription:
+Customers who purchased Microsoft Operations Management Suite E1 and E2 are eligible for per-node data ingestion entitlements for Log Analytics and Application Insights. Each Application Insights node includes up to 200 MB of data ingested per day (separate from Log Analytics data ingestion), with 90-day data retention at no extra cost.
-- Log Analytics workspaces should use the Per-Node (OMS) pricing tier.-- Application Insights resources should use the Enterprise pricing tier.
+To receive these entitlements for Log Analytics workspaces or Application Insights resources in a subscription, they must use the Per-Node (OMS) pricing tier. This entitlement isn't visible in the estimated costs shown in the Usage and estimated costs pane.
Depending on the number of nodes of the suite that your organization purchased, moving some subscriptions into a Per GB (pay-as-you-go) pricing tier might be advantageous, but this requires careful consideration. +
+Also, if you move a subscription to the new Azure monitoring pricing model introduced in April 2018, the Per GB tier is the only tier available. Moving a subscription to the new Azure monitoring pricing model isn't advisable if you have an Operations Management Suite subscription.
+
> [!TIP]
> If your organization has Microsoft Operations Management Suite E1 or E2, it's usually best to keep your Log Analytics workspaces in the Per-Node (OMS) pricing tier and your Application Insights resources in the Enterprise pricing tier.

## Next steps
-Get cost information for specific components of Azure Monitor:
--- [Manage usage and costs with Azure Monitor Logs](logs/manage-cost-storage.md) describes how to control your costs by changing your data retention period, and how to analyze and alert on your data usage.-- [Manage usage and costs for Application Insights](app/pricing.md) describes how to analyze data usage in Application Insights.
+- See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
+- See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce the amount of data collected.
+- See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) to control your costs by setting a daily limit on the amount of data that may be ingested in a workspace.
+- See [Azure Monitor best practices - Cost management](best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges.
azure-monitor Monitor Virtual Machine Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-configure.md
The following table lists the steps that must be performed for this configuratio
Azure Monitor provides a basic level of monitoring for Azure virtual machines at no cost and with no configuration. Platform metrics for Azure virtual machines include important metrics such as CPU, network, and disk utilization. They can be viewed on the [Overview page](monitor-virtual-machine-analyze.md#single-machine-experience) for the machine in the Azure portal. The Activity log is also collected automatically and includes the recent activity of the machine, such as any configuration changes and when it was stopped and started. ## Create and prepare a Log Analytics workspace
-You require at least one Log Analytics workspace to support VM insights and to collect telemetry from the Log Analytics agent. There's no cost for the workspace, but you do incur ingestion and retention costs when you collect data. For more information, see [Manage usage and costs with Azure Monitor Logs](../logs/manage-cost-storage.md).
+You require at least one Log Analytics workspace to support VM insights and to collect telemetry from the Log Analytics agent. There's no cost for the workspace, but you do incur ingestion and retention costs when you collect data. For more information, see [Azure Monitor Logs pricing details](../logs/cost-logs.md).
Many environments use a single workspace for all their virtual machines and other Azure resources they monitor. You can even share a workspace used by [Microsoft Defender for Cloud and Microsoft Sentinel](monitor-virtual-machine-security.md), although many customers choose to segregate their availability and performance telemetry from security data. If you're getting started with Azure Monitor, start with a single workspace and consider creating more workspaces as your requirements evolve.
azure-monitor Vminsights Health Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-health-overview.md
VM insights guest health allows you to view the health of virtual machines based
See [Enable VM insights guest health (preview)](vminsights-health-enable.md) for details on enabling the guest health feature and onboarding virtual machines. ## Pricing
-There is no direct cost for the guest health feature, but there is a cost for ingestion and storage of health state data in the Log Analytics workspace. All data is stored in the *HealthStateChangeEvent* table. See [Manage usage and costs with Azure Monitor Logs](../logs/manage-cost-storage.md) for details on pricing models and costs.
+There is no direct cost for the guest health feature, but there is a cost for ingestion and storage of health state data in the Log Analytics workspace. All data is stored in the *HealthStateChangeEvent* table. See [Azure Monitor Logs pricing details](../logs/cost-logs.md) for details on pricing models and costs.
## View virtual machine health The **Guest VM Health** column in the **Get Started** page gives you a quick view of the health of each virtual machine in a particular subscription or resource group. The current health of each virtual machine is displayed while icons for each group show the number of virtual machines currently in each state in that group.
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
- [Azure Monitor customer-managed key](logs/customer-managed-keys.md) - [Log Analytics workspace data export in Azure Monitor (preview)](logs/logs-data-export.md)-- [Manage usage and costs with Azure Monitor Logs](logs/manage-cost-storage.md) ## December, 2021
This article lists significant changes to Azure Monitor documentation.
**Updated articles** - [Troubleshooting no data - Application Insights for .NET/.NET Core](app/asp-net-troubleshoot-no-data.md)-- [Manage usage and costs for Application Insights](app/pricing.md) - [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](app/java-in-process-agent.md) - [Enable Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications (preview)](app/opentelemetry-enable.md) - [Release notes for Azure Web App extension for Application Insights](app/web-app-extension-release-notes.md)
This article lists significant changes to Azure Monitor documentation.
- [Log Analytics workspace data export in Azure Monitor (preview)](logs/logs-data-export.md) - [Azure Monitor customer-managed key](logs/customer-managed-keys.md) - [Azure Monitor Logs Dedicated Clusters](logs/logs-dedicated-clusters.md)-- [Manage usage and costs with Azure Monitor Logs](logs/manage-cost-storage.md) ### Virtual Machines
This article lists significant changes to Azure Monitor documentation.
**Updated articles** - [Log Analytics tutorial](logs/log-analytics-tutorial.md)-- [Manage usage and costs with Azure Monitor Logs](logs/manage-cost-storage.md) - [Use Azure Private Link to securely connect networks to Azure Monitor](logs/private-link-security.md) - [Azure Monitor Logs Dedicated Clusters](logs/logs-dedicated-clusters.md) - [Monitor health of Log Analytics workspace in Azure Monitor](logs/monitor-workspace.md)
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 03/18/2022 Last updated : 04/07/2022 # Solution architectures using Azure NetApp Files
This section provides references for solutions for Linux OSS applications and da
* [Cloudera Machine Learning](https://docs.cloudera.com/machine-learning/cloud/requirements-azure/topics/ml-requirements-azure.html)
* [Distributed training in Azure: Lane detection - Solution design](https://www.netapp.com/media/32427-tr-4896-design.pdf)
* [Distributed training in Azure: Click-Through Rate Prediction – Solution design](https://docs.netapp.com/us-en/netapp-solutions/ai/aks-anf_introduction.html)
+* [How to use Azure Machine Learning with Azure NetApp Files](https://github.com/csiebler/azureml-with-azure-netapp-files)
### Education
azure-sql Advance Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/advance-notifications.md
Previously updated : 03/25/2022 Last updated : 04/04/2022 # Advance notifications for planned maintenance events (Preview) [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
resources
| extend p = parse_json(properties) | mvexpand d = p.value | where d has 'notificationId' and d.notificationId == 'LNPN-R9Z'
- | project resource = tolower(name), status = d.status
+ | project resource = tolower(name), status = d.status, resourceGroup, location, startTimeUtc = d.startTimeUtc, endTimeUtc = d.endTimeUtc, impactType = d.impactType
) on resource
-|project resource, status
+| project resource, status, resourceGroup, location, startTimeUtc, endTimeUtc, impactType
``` For the full reference of the sample queries and how to use them across tools like PowerShell or Azure CLI, visit [Azure Resource Graph sample queries for Azure Service Health](../../service-health/resource-graph-samples.md).
azure-sql Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/maintenance-window.md
Previously updated : 03/07/2022 Last updated : 04/04/2022 # Maintenance window
servicehealthresources
| extend impact = properties.Impact | extend impactedService = parse_json(impact[0]).ImpactedService | where impactedService =~ 'SQL Database'
-| extend eventType = properties.EventType, status = properties.Status, description = properties.Title, trackingId = properties.TrackingId, summary = properties.Summary, priority = properties.Priority, impactStartTime = properties.ImpactStartTime, impactMitigationTime = properties.ImpactMitigationTime
-| where properties.Status == 'Active' and tolong(impactStartTime) > 1 and eventType == 'PlannedMaintenance'
+| extend eventType = properties.EventType, status = properties.Status, description = properties.Title, trackingId = properties.TrackingId, summary = properties.Summary, priority = properties.Priority, impactStartTime = todatetime(tolong(properties.ImpactStartTime)), impactMitigationTime = todatetime(tolong(properties.ImpactMitigationTime))
+| where eventType == 'PlannedMaintenance'
+| order by impactStartTime desc
``` To check for the maintenance events for all managed instances in your subscription, use the following sample query in Azure Resource Graph Explorer:
servicehealthresources
| extend impact = properties.Impact | extend impactedService = parse_json(impact[0]).ImpactedService | where impactedService =~ 'SQL Managed Instance'
-| extend eventType = properties.EventType, status = properties.Status, description = properties.Title, trackingId = properties.TrackingId, summary = properties.Summary, priority = properties.Priority, impactStartTime = properties.ImpactStartTime, impactMitigationTime = properties.ImpactMitigationTime
-| where properties.Status == 'Active' and tolong(impactStartTime) > 1 and eventType == 'PlannedMaintenance'
+| extend eventType = properties.EventType, status = properties.Status, description = properties.Title, trackingId = properties.TrackingId, summary = properties.Summary, priority = properties.Priority, impactStartTime = todatetime(tolong(properties.ImpactStartTime)), impactMitigationTime = todatetime(tolong(properties.ImpactMitigationTime))
+| where eventType == 'PlannedMaintenance'
+| order by impactStartTime desc
``` For the full reference of the sample queries and how to use them across tools like PowerShell or Azure CLI, visit [Azure Resource Graph sample queries for Azure Service Health](../../service-health/resource-graph-samples.md).
azure-video-analyzer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/language-support.md
Previously updated : 02/02/2022 Last updated : 04/07/2022 # Language support in Video Analyzer for Media
This section describes language support in Video Analyzer for Media.
The following insights are translated; otherwise, they remain in English: - Transcript
- - OCR
- Keywords - Topics - Labels
azure-video-analyzer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/release-notes.md
Title: Azure Video Analyzer for Media (formerly Video Indexer) release notes | M
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Analyzer for Media (formerly Video Indexer). Previously updated : 04/04/2022 Last updated : 04/07/2022
To stay up-to-date with the most recent Azure Video Analyzer for Media (former Video Indexer) developments, this article provides you with information about:
+* [Important notice](#upcoming-critical-changes) about planned changes
* The latest releases * Known issues * Bug fixes * Deprecated functionality
-## March 2022
+## Upcoming critical changes
+
+> [!Important]
+> This section describes a critical upcoming change for the `Upload-Video` API.
++
+### Upload-Video API
+
+In the past, the `Upload-Video` API was tolerant of calls to upload a video from a URL where an empty multipart form body was provided in the C# code, such as:
+
+```csharp
+var content = new MultipartFormDataContent();
+var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos?{queryParams}", content);
+```
+
+In the coming weeks, our service will fail requests of this type.
+
+In order to upload a video from a URL, change your code to send `null` in the request body:
+
+```csharp
+var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos?{queryParams}", null);
+```
+
+## March 2022 release updates
### Closed Captioning files now support including speakers' attributes
To enable the dark mode open the settings panel and toggle on the **Dark Mode**
:::image type="content" source="./media/release-notes/dark-mode.png" alt-text="Dark mode setting":::
-## December 2020
+## December 2020
### Video Analyzer for Media deployed in the Switzerland West and Switzerland North
Multiple advancements announced at IBC 2019:
The topic inferencing model now supports deeper granularity of the IPTC taxonomy. Read full details at [Azure Media Services new AI-powered innovation](https://azure.microsoft.com/blog/azure-media-services-new-ai-powered-innovation/).
-## August 2019
+## August 2019 updates
### Video Analyzer for Media deployed in UK South
azure-vmware Create Placement Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/create-placement-policy.md
Title: Create placement policy description: Learn how to create a placement policy in Azure VMware Solution to control the placement of virtual machines (VMs) on hosts within a cluster through the Azure portal. Previously updated : 12/16/2021 Last updated : 04/07/2022 #Customer intent: As an Azure service administrator, I want to control the placement of virtual machines on hosts within a cluster in my private cloud.
The assignment of hosts isn't required or permitted for this policy type.
- **VM-VM Affinity** policies instruct DRS to try to keep the specified VMs together on the same host. It's useful for performance reasons, for example. -- **VM-VM Anti-Affinity** policies instruct DRS to try keeping the specified VMs apart from each other on separate hosts. It's useful in scenarios where a problem with one host doesn't affect multiple VMs within the same policy.
+- **VM-VM Anti-Affinity** policies instruct DRS to try keeping the specified VMs apart from each other on separate hosts. It's useful in availability scenarios where a problem with one host doesn't affect multiple VMs within the same policy.
### VM-Host policies
You can delete a placement policy and its corresponding DRS rule.
Use the vSphere Client to monitor the operation of a placement policy's corresponding DRS rule.
-As a holder of the cloudadmin role, you can view, but not edit, the DRS rules created by a placement policy on the cluster's Configure tab under VM/Host Rules. It lets you view additional information, such as if the DRS rules are in a conflict state.
+As a holder of the CloudAdmin role, you can view, but not edit, the DRS rules created by a placement policy on the cluster's Configure tab under VM/Host Rules. It lets you view additional information, such as if the DRS rules are in a conflict state.
Additionally, you can monitor various DRS rule operations, such as recommendations and faults, from the cluster's Monitor tab.
For most workloads this is not necessary and may cause unintended performance im
1. Navigate to Manage Placement policies and click Restrict VM movement. 1. Select the VM or VMs you want to restrict, then click Select. 1. The VM or VMS you selected appears in the VMs with restricted movement tab.
-In the vSphere client, a VM override will be created to set DRS to partially automated for that VM.
+In the vSphere Client, a VM override will be created to set DRS to partially automated for that VM.
DRS will no longer migrate the VM automatically. Manual vMotion of the VM and automatic initial placement of the VM will continue to function.
Yes, and no. While vSphere DRS implements the current set of policies, we have s
Azure VMware Solution provides a VMware private cloud in Azure. In this managed VMware infrastructure, Microsoft manages the clusters, hosts, datastores, and distributed virtual switches in the private cloud. At the same time, the tenant is responsible for managing the workloads deployed on the private cloud. As a result, the tenant administering the private cloud [does not have the same set of privileges](concepts-identity.md) as available to the VMware administrator in an on-premises deployment.
-Further, the lack of the desired granularity in the vSphere privileges presents some challenges when managing the placement of the workloads on the private cloud. For example, vSphere DRS rules commonly used on-premises to define affinity and anti-affinity rules can't be used as-is in a VMware Cloud environment, as some of those rules can block day-to-day operation the private cloud. Placement Policies provides a way to define those rules using the Azure VMware Solution portal, thereby circumventing the need to use DRS rules. Coupled with a simplified experience, they also ensure that the rules don't impact the day-to-day infrastructure maintenance and operation activities.
+Further, the lack of the desired granularity in the vSphere privileges presents some challenges when managing the placement of the workloads on the private cloud. For example, vSphere DRS rules commonly used on-premises to define affinity and anti-affinity rules can't be used as-is in an Azure VMware Solution environment, as some of those rules can block day-to-day operation of the private cloud. Placement Policies provide a way to define those rules using the Azure VMware Solution portal, thereby circumventing the need to use DRS rules. Coupled with a simplified experience, they also ensure that the rules don't impact the day-to-day infrastructure maintenance and operation activities.
### What is the difference between the VM-Host affinity policy and Restrict VM movement?
The VM-Host **MUST** rules aren't supported because they block maintenance opera
VM-Host **SHOULD** rules are preferential rules, where vSphere DRS tries to accommodate the rules to the extent possible. Occasionally, vSphere DRS may vMotion VMs subjected to the VM-Host **SHOULD** rules to ensure that the workloads get the resources they need. It's a standard vSphere DRS behavior, and the Placement policies feature does not change the underlying vSphere DRS behavior.
-If you create conflicting rules, those conflicts may show up on the vCenter, and the newly defined rules may not take effect. It's a standard vSphere DRS behavior, the logs for which can be observed in the vCenter.
+If you create conflicting rules, those conflicts may show up on the vCenter Server, and the newly defined rules may not take effect. It's a standard vSphere DRS behavior, the logs for which can be observed in the vCenter Server.
backup Backup Azure Mars Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-mars-troubleshoot.md
Title: Troubleshoot the Azure Backup agent description: In this article, learn how to troubleshoot the installation and registration of the Azure Backup agent. Previously updated : 06/04/2021 Last updated : 04/05/2022+++ # Troubleshoot the Microsoft Azure Recovery Services (MARS) agent
We recommend that you check the following before you start troubleshooting Micro
**Error message**: Invalid vault credentials provided. The file is either corrupted or does not have the latest credentials associated with recovery service. (ID: 34513)
-| Cause | Recommended actions |
+| Causes | Recommended actions |
| | |
-| **Vault credentials aren't valid** <br/> <br/> Vault credential files might be corrupt, might have expired, or they might have a different file extension than *.vaultCredentials*. (For example, they might have been downloaded more than 10 days before the time of registration.)| [Download new credentials](backup-azure-file-folder-backup-faq.yml#where-can-i-download-the-vault-credentials-file-) from the Recovery Services vault on the Azure portal. Then take these steps, as appropriate: <ul><li> If you've already installed and registered MARS, open the Microsoft Azure Backup Agent MMC console. Then select **Register Server** in the **Actions** pane to complete the registration with the new credentials. <br/> <li> If the new installation fails, try reinstalling with the new credentials.</ul> **Note**: If multiple vault credential files have been downloaded, only the latest file is valid for the next 10 days. We recommend that you download a new vault credential file.
-| **Proxy server/firewall is blocking registration** <br/>or <br/>**No internet connectivity** <br/><br/> If your machine has limited internet access, and you don't ensure the firewall, proxy, and network settings allow access to the FQDNS and public IP addresses, the registration will fail.| Take these steps:<br/> <ul><li> Work with your IT team to ensure the system has internet connectivity.<li> If you don't have a proxy server, ensure the proxy option isn't selected when you register the agent. [Check your proxy settings](#verifying-proxy-settings-for-windows).<li> If you do have a firewall/proxy server, work with your networking team to allow access to the following FQDNs and public IP addresses. Access to all of the URLs and IP addresses listed below uses the HTTPS protocol on port 443.<br/> <br> **URLs**<br> `www.msftncsi.com` <br> `www.msftconnecttest.com` <br> \*.microsoft.com <br> \*.windowsazure.com <br> \*.microsoftonline.com <br>\*.windows.net<br><br>**IP addresses**<br> 20.190.128.0/18 <br> 40.126.0.0/18<br> <br/><li>If you are a US Government customer, ensure that you have access to the following URLs:<br><br> `www.msftncsi.com` <br> \*.microsoft.com <br> \*.windowsazure.us <br> \*.microsoftonline.us <br> `*.windows.net` <br> \*.usgovcloudapi.net</li></ul></ul>Try registering again after you complete the preceding troubleshooting steps.<br></br> If your connection is via Azure ExpressRoute, make sure the settings are configured as described in Azure [ExpressRoute support](../backup/backup-support-matrix-mars-agent.md#azure-expressroute-support).
-| **Antivirus software is blocking registration** | If you have antivirus software installed on the server, add necessary exclusion rules to the antivirus scan for these files and folders: <br/><ul> <li> CBengine.exe <li> CSC.exe<li> The scratch folder. Its default location is C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch. <li> The bin folder at C:\Program Files\Microsoft Azure Recovery Services Agent\Bin.
+| **Vault credentials aren't valid** <br/> <br/> Vault credential files might be corrupt, might have expired, or they might have a different file extension than *.vaultCredentials*. (For example, they might have been downloaded more than 10 days before the time of registration.)| [Download new credentials](backup-azure-file-folder-backup-faq.yml#where-can-i-download-the-vault-credentials-file-) from the Recovery Services vault on the Azure portal. Then take these steps, as appropriate: <br><br>- If you've already installed and registered MARS, open the Microsoft Azure Backup Agent MMC console. Then select **Register Server** in the **Actions** pane to complete the registration with the new credentials. <br> - If the new installation fails, try reinstalling with the new credentials. <br><br> **Note**: If multiple vault credential files have been downloaded, only the latest file is valid for the next 10 days. We recommend that you download a new vault credential file. |
+| **Proxy server/firewall is blocking registration** <br/>Or <br/>**No internet connectivity** <br/><br/> If your machine has limited internet access, and you don't ensure the firewall, proxy, and network settings allow access to the FQDNS and public IP addresses, the registration will fail.| Follow these steps:<br/> <br><br>- Work with your IT team to ensure the system has internet connectivity.<br>- If you don't have a proxy server, ensure the proxy option isn't selected when you register the agent. [Check your proxy settings](#verifying-proxy-settings-for-windows).<br>- If you do have a firewall/proxy server, work with your networking team to allow access to the following FQDNs and public IP addresses. Access to all of the URLs and IP addresses listed below uses the HTTPS protocol on port 443.<br/> <br> **URLs**<br> `*.microsoft.com` <br> `*.windowsazure.com` <br> `*.microsoftonline.com` <br> `*.windows.net` <br> `*blob.core.windows.net` <br> `*queue.core.windows.net` <br> `*blob.storage.azure.net`<br><br><br>- If you are a US Government customer, ensure that you have access to the following URLs:<br><br> `www.msftncsi.com` <br> `*.microsoft.com` <br> `*.windowsazure.us` <br> `*.microsoftonline.us` <br> `*.windows.net` <br> `*.usgovcloudapi.net` <br> `*blob.core.windows.net` <br> `*queue.core.windows.net` <br> `*blob.storage.azure.net` <br><br> Try registering again after you complete the preceding troubleshooting steps.<br></br> If your connection is via Azure ExpressRoute, make sure the settings are configured as described in Azure [ExpressRoute support](../backup/backup-support-matrix-mars-agent.md#azure-expressroute-support). |
+| **Antivirus software is blocking registration** | If you have antivirus software installed on the server, add the exclusion rules to the antivirus scan for: <br><br> - Every file and folder under the *scratch* and *bin* folder locations - `<InstallPath>\Scratch\*` and `<InstallPath>\Bin\*`. <br> - cbengine.exe |
### Additional recommendations
Backup operations could fail if there isn't sufficient shadow copy storage space
### Another process or antivirus software blocking access to cache folder
-If you have antivirus software installed on the server, add necessary exclusion rules to the antivirus scan for these files and folders:
--- The scratch folder. Its default location is `C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch`-- The bin folder at `C:\Program Files\Microsoft Azure Recovery Services Agent\Bin`-- CBengine.exe-- CSC.exe ## Common issues
backup Backup Azure Troubleshoot Slow Backup Performance Issue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-troubleshoot-slow-backup-performance-issue.md
We've seen several instances where other processes in the Windows system have ne
The best recommendation in this scenario is to turn off the other backup program to see whether the backup time for the Azure Backup agent changes. Usually, making sure that multiple backup jobs are not running at the same time is sufficient to prevent them from affecting each other.
-For antivirus programs, we recommend that you exclude the following files and locations:
-
-* C:\Program Files\Microsoft Azure Recovery Services Agent\bin\cbengine.exe as a process
-* C:\Program Files\Microsoft Azure Recovery Services Agent\ folders
-* Scratch location (if you're not using the standard location)
<a id="cause3"></a>
backup Backup Reports Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-reports-email.md
Title: Email Azure Backup Reports description: Create automated tasks to receive periodic reports via email Previously updated : 02/14/2022 Last updated : 04/06/2022
Using the **Email Report** feature available in Backup Reports, you can create a
To configure email tasks via Backup Reports, perform the following steps:
-1. Navigate to **Backup Center** > **Backup Reports** and click on the **Email Report** tab.
+1. Go to **Backup Center** > **Backup Reports** and click on the **Email Report** tab.
2. Create a task by specifying the following information:
   * **Task Details** - The name of the logic app to be created, and the subscription, resource group, and location in which it should be created. Note that the logic app can query data across multiple subscriptions, resource groups, and locations (as selected in the Report Filters section), but is created in the context of a single subscription, resource group, and location.
   * **Data To Export** - The tab which you wish to export. You can either create a single task app per tab, or email all tabs using a single task, by selecting the **All Tabs** option.
To configure email tasks via Backup Reports, perform the following steps:
## Authorize connections to Azure Monitor Logs and Office 365
-The logic app uses the [azuremonitorlogs](/connectors/azuremonitorlogs/) connector for querying the LA workspace(s) and uses the [Office365 Outlook](/connectors/office365connector/) connector for sending emails. You will need to perform a one-time authorization for these two connectors.
+The logic app uses the [azuremonitorlogs](/connectors/azuremonitorlogs/) connector for querying the LA workspace(s) and uses the [Office365 Outlook](/connectors/office365connector/) connector for sending emails. You'll need to perform a one-time authorization for these two connectors.
To perform the authorization, follow the steps below:
-1. Navigate to **Logic Apps** in the Azure portal.
-2. Search for the name of the logic app you have created and navigate to the resource.
+1. Go to **Logic Apps** in the Azure portal.
+2. Search for the name of the logic app you've created and go to the resource.
![Logic Apps](./media/backup-azure-configure-backup-reports/logic-apps.png)
To perform the authorization, follow the steps below:
![API Connections](./media/backup-azure-configure-backup-reports/api-connections.png)
-4. You will see two connections with the format `<location>-azuremonitorlogs` and `<location>-office365` - that is, _eastus-azuremonitorlogs_ and _eastus-office365_.
-5. Navigate to each of these connections and select the **Edit API connection** menu item. In the screen that appears, select **Authorize**, and save the connection once authorization is complete.
+4. You'll see two connections with the format `<location>-azuremonitorlogs` and `<location>-office365` - that is, _eastus-azuremonitorlogs_ and _eastus-office365_.
+5. Go to each of these connections and select the **Edit API connection** menu item. In the screen that appears, select **Authorize**, and save the connection once authorization is complete.
![Authorize connection](./media/backup-azure-configure-backup-reports/authorize-connections.png)
-6. To test whether the logic app works after authorization, you can navigate back to the logic app, open **Overview** and select **Run Trigger** in the top pane, to test whether an email is being generated successfully.
+6. To test whether the logic app works after authorization, you can go back to the logic app, open **Overview** and select **Run Trigger** in the top pane, to test whether an email is being generated successfully.
## Contents of the email * All the charts and graphs shown in the portal are available as inline content in the email. [Learn more](configure-reports.md) about the information shown in Backup Reports. * The grids shown in the portal are available as *.csv attachments in the email. * The data shown in the email uses all the report-level filters selected by the user in the report, at the time of creating the email task.
-* Tab-level filters such as **Backup Instance Name**, **Policy Name** and so on, are not applied. The only exception to this is the **Retention Optimizations** grid in the **Optimize** tab, where the filters for **Daily**, **Weekly**, **Monthly** and **Yearly** RP retention are applied.
+* Tab-level filters such as **Backup Instance Name**, **Policy Name** and so on, aren't applied. The only exception to this is the **Retention Optimizations** grid in the **Optimize** tab, where the filters for **Daily**, **Weekly**, **Monthly** and **Yearly** RP retention are applied.
* The time range and aggregation type (for charts) are based on the user's time range selection in the reports. For example, if the time range selection is last 60 days (translating to weekly aggregation type), and email frequency is daily, the recipient will receive an email every day with charts spanning data taken over the last 60-day period, with data aggregated at a weekly level. ## Troubleshooting issues
If you aren't receiving emails as expected even after successful deployment of t
### Scenario 1: Receiving neither a successful email nor an error email
-* This issue could be occurring because the Outlook API connector is not authorized. To authorize the connection, follow the authorization steps provided above.
+* This issue could be occurring because the Outlook API connector isn't authorized. To authorize the connection, follow the authorization steps provided above.
-* This issue could also be occurring if you have specified an incorrect email recipient while creating the logic app. To verify that the email recipient has been specified correctly, you can navigate to the logic app in the Azure portal, open the Logic App designer and select email step to see whether the correct email IDs are being used.
+* This issue could also be occurring if you've specified an incorrect email recipient while creating the logic app. To verify that the email recipient has been specified correctly, you can go to the logic app in the Azure portal, open the Logic App designer and select email step to see whether the correct email IDs are being used.
### Scenario 2: Receiving an error email that says that the logic app failed to execute to completion

To troubleshoot this issue:
-1. Navigate to the logic app in the Azure portal.
-2. At the bottom of the **Overview** screen, you will see a **Runs History** section. You can open on the latest run and view which steps in the workflow failed. Some possible causes could be:
+1. Go to the logic app in the Azure portal.
+2. At the bottom of the **Overview** screen, you'll see a **Runs History** section. You can open the latest run and view which steps in the workflow failed. Some possible causes could be:
* **Azure Monitor Logs Connector hasn't been authorized**: To fix this issue, follow the authorization steps provided above.
* **Error in the LA query**: If you've customized the logic app with your own queries, an error in any of the LA queries might be causing the logic app to fail. You can select the relevant step and view the error that's causing the query to run incorrectly.
To ensure you're logged in to the right tenant, you can open _portal.azure.com/<
If the issues persist, contact Microsoft support.
+## Guidance for GCC High users
+
+If you're a user in an Azure Government environment using an [Office365 GCC High account](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc-high-and-dod), ensure that the email configuration is set correctly. A different endpoint is used for authorizing this connection for GCC High users, and it needs to be explicitly specified. Use one of the following methods to verify the configuration and set up the logic app to work in GCC High.
+
+**Choose a client:**
+
+# [Azure portal](#tab/portal)
+
+To update the authentication type for the Office 365 connection via the Azure portal, follow these steps:
+
+1. Deploy the logic app task for the required tabs. See the steps in [Getting started](#getting-started).
+
+ Learn about [how to authorize the Azure Monitor Logs connection](#authorize-connections-to-azure-monitor-logs-and-office-365).
+
+1. Once deployed, go to the logic app in the Azure portal and click **Logic app designer** from the menu.
+
+ :::image type="content" source="./media/backup-azure-configure-backup-reports/logic-app-designer-inline.png" alt-text="Screenshot showing to click Logic app designer." lightbox="./media/backup-azure-configure-backup-reports/logic-app-designer-expanded.png":::
+
+1. Locate the places where the Office 365 action is used.
+
+ You'll find two Office 365 actions used, both at the bottom of the flow.
+
+ :::image type="content" source="./media/backup-azure-configure-backup-reports/office-365-change-connection.png" alt-text="Screenshot showing Office 365 change connection.":::
+
+1. Click **Change connection**, and then click the *information icon*.
+
+ :::image type="content" source="./media/backup-azure-configure-backup-reports/email-information-icon.png" alt-text="Screenshot showing to click information icon.":::
+
+1. A popup opens where you can select the authentication type for GCC High.
+
+Once you select the correct authentication type in all the places where the Office 365 connection is used, the connection should work as expected.
+
+# [Azure Resource Manager (ARM) template](#tab/arm)
+
+You can also directly update the ARM template, which is used for deploying the logic app, to ensure that the GCC High endpoint is used for authorization. Follow these steps:
+
+1. Go to the **Email Report** tab, provide the required inputs, and then click **Submit**.
+
+ :::image type="content" source="./media/backup-azure-configure-backup-reports/view-template-inline.png" alt-text="View email template." lightbox="./media/backup-azure-configure-backup-reports/view-template-expanded.png":::
+
+1. Click **View template**.
+
+    This opens the ARM template JSON, which you can download and edit.
+
+ :::image type="content" source="./media/backup-azure-configure-backup-reports/edit-template-inline.png" alt-text="Screenshot showing to edit template." lightbox="./media/backup-azure-configure-backup-reports/edit-template-expanded.png":::
+
+1. Locate the *resources* block in the JSON file (specifically the section where a resource of type `Microsoft.Web/connections` is deployed with the Office 365 parameters).
+
+ To modify the template to support *GCCHigh*, add the subsection *parameterValueSet* to the properties section of this resource.
+
+    The updated block would look like the following:
+
+ ```json
+ {
+ "type": "Microsoft.Web/connections",
+ "apiVersion": "2018-07-01-preview",
+ "name": "[variables('office365ConnectionName')]",
+ "location": "[parameters('location')]",
+ "tags": "[parameters('resourceTags')]",
+ "properties": {
+ "api": {
+ "id": "[subscriptionResourceId('Microsoft.Web/locations/managedApis', parameters('location'), 'office365')]"
+ },
+ "parameterValueSet": {
+ "name": "oauthGccHigh",
+ "values": {
+ "token": {
+ "value": "https://logic-apis-usgovvirginia.consent.azure-apihub.us/redirect"
+ }
+ }
+ },
+ "displayName": "office365"
+ }
+ }
+ ```
+1. Once you have the edited template, [deploy this template](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md#edit-and-deploy-the-template).
+
+1. Once deployed, [authorize the Azure Monitor Logs and Office 365 connections](#authorize-connections-to-azure-monitor-logs-and-office-365).
+++
## Next steps

[Learn more about Backup Reports](./configure-reports.md)
backup Configure Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/configure-reports.md
Set up one or more Log Analytics workspaces to store your Backup reporting data.
To set up a Log Analytics workspace, see [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md).
-By default, the data in a Log Analytics workspace is retained for 30 days. To see data for a longer time horizon, change the retention period of the Log Analytics workspace. To change the retention period, see [Manage usage and costs with Azure Monitor logs](../azure-monitor/logs/manage-cost-storage.md).
+By default, the data in a Log Analytics workspace is retained for 30 days. To see data for a longer time horizon, change the retention period of the Log Analytics workspace. To change the retention period, see [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md).
### 2. Configure diagnostics settings for your vaults
batch Batch Linux Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-linux-nodes.md
Not all Marketplace images are compatible with the currently available Batch nod
### Node agent SKU
-The [Batch node agent](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md) is a program that runs on each node in the pool and provides the command-and-control interface between the node and the Batch service. There are different implementations of the node agent, known as SKUs, for different operating systems. Essentially, when you create a Virtual Machine Configuration, you first specify the virtual machine image reference, and then you specify the node agent to install on the image. Typically, each node agent SKU is compatible with multiple virtual machine images. To view supported Marketplace VM images with their corresponding node agent SKUs, you can refer to [Account - List Supported Images - REST API (Azure Batch Service) | Microsoft Docs](/rest/api/batchservice/account/list-supported-images).
+The [Batch node agent](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md) is a program that runs on each node in the pool and provides the command-and-control interface between the node and the Batch service. There are different implementations of the node agent, known as SKUs, for different operating systems. Essentially, when you create a Virtual Machine Configuration, you first specify the virtual machine image reference, and then you specify the node agent to install on the image. Typically, each node agent SKU is compatible with multiple virtual machine images. To view the supported node agent SKUs and virtual machine image compatibilities, you can use the [Azure Batch CLI command](/cli/azure/batch/pool#supported-images):
+
+```azurecli-interactive
+az batch pool supported-images list
+```
+
+For more information, you can refer to [Account - List Supported Images - REST API (Azure Batch Service) | Microsoft Docs](/rest/api/batchservice/account/list-supported-images).
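
If you prefer scripting this lookup, the same information is available through the Batch Python SDK. The following is a minimal sketch, not taken from this article, that assumes placeholder values for the Batch account name, key, and URL:

```python
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials

# Placeholder account values; replace with your own Batch account details.
credentials = SharedKeyCredentials("<batch-account-name>", "<batch-account-key>")
batch_client = BatchServiceClient(
    credentials, batch_url="https://<batch-account-name>.<region>.batch.azure.com"
)

# Print each supported Marketplace image with its corresponding node agent SKU.
for image in batch_client.account.list_supported_images():
    ref = image.image_reference
    print(f"{image.node_agent_sku_id}: {ref.publisher}/{ref.offer}/{ref.sku}")
```

The `node_agent_sku_id` value returned here is what you later supply in the pool's virtual machine configuration.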
## Create a Linux pool: Batch Python
cdn Create Profile Endpoint Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-bicep.md
Get-AzResource -ResourceGroupName exampleRG
## Clean up resources
-### Azure CLI
-
-When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the VM and all of the resources in the resource group.
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
# [CLI](#tab/CLI)
cognitive-services Language Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md
using (var audioInput = AudioConfig.FromWavFileInput(@"en-us_zh-cn.wav"))
{ Console.WriteLine($"CANCELED: ErrorCode={e.ErrorCode}"); Console.WriteLine($"CANCELED: ErrorDetails={e.ErrorDetails}");
- Console.WriteLine($"CANCELED: Did you update the subscription info?");
+ Console.WriteLine($"CANCELED: Did you set the speech resource key and region values?");
} stopRecognition.TrySetResult(0);
public static async Task MultiLingualTranslation()
{ Console.WriteLine($"CANCELED: ErrorCode={e.ErrorCode}"); Console.WriteLine($"CANCELED: ErrorDetails={e.ErrorDetails}");
- Console.WriteLine($"CANCELED: Did you update the subscription info?");
+ Console.WriteLine($"CANCELED: Did you set the speech resource key and region values?");
} stopTranslation.TrySetResult(0);
else if (result->Reason == ResultReason::Canceled)
{ cout << "CANCELED: ErrorCode=" << (int)cancellation->ErrorCode << std::endl; cout << "CANCELED: ErrorDetails=" << cancellation->ErrorDetails << std::endl;
- cout << "CANCELED: Did you update the subscription info?" << std::endl;
+ cout << "CANCELED: Did you set the speech resource key and region values?" << std::endl;
} } ```
void MultiLingualTranslation()
{ cout << "CANCELED: ErrorCode=" << (int)e.ErrorCode << std::endl; cout << "CANCELED: ErrorDetails=" << e.ErrorDetails << std::endl;
- cout << "CANCELED: Did you update the subscription info?" << std::endl;
+ cout << "CANCELED: Did you set the speech resource key and region values?" << std::endl;
recognitionEnd.set_value(); }
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/call-api.md
Previously updated : 02/16/2022 Last updated : 04/06/2022
To delete a deployment, select the deployment you want to delete and select **De
2. Replace `<OPERATION_ID>` with the `jobId` from the previous step.
-3. Submit the `GET` cURL request in your terminal or command prompt. You'll receive a 202 response and JSON similar to the below, if the request was successful.
-
+3. Submit the `GET` cURL request in your terminal or command prompt. You'll receive a 202 response with the API results if the request was successful.
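
If you'd rather poll for the results from a script than from cURL, here's a minimal Python sketch using the `requests` package. The endpoint, key, and job ID are placeholders, and the URL follows the `operation-location` format referenced earlier in this article:

```python
import requests

# Placeholder values; use your Language resource endpoint and key, plus the jobId
# returned in the operation-location header of the submit request.
endpoint = "<YOUR-ENDPOINT>"
job_id = "<OPERATION_ID>"
url = f"{endpoint}/text/analytics/v3.2-preview.2/analyze/jobs/{job_id}"
headers = {"Ocp-Apim-Subscription-Key": "<YOUR-RESOURCE-KEY>"}

response = requests.get(url, headers=headers)
print(response.status_code)
print(response.json())
```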
# [Using the REST API](#tab/rest-api)
First you will need to get your resource key and endpoint
### Submit custom NER task
-1. Start constructing a POST request by updating the following URL with your endpoint.
-
- `{YOUR-ENDPOINT}/text/analytics/v3.2-preview.2/analyze`
-
-2. In the header for the request, add your key to the `Ocp-Apim-Subscription-Key` header.
-
-3. In the JSON body of your request, you will specify The documents you're inputting for analysis, and the parameters for the custom entity recognition task. `project-name` is case-sensitive.
-
- > [!tip]
- > See the [quickstart article](../quickstart.md?pivots=rest-api#submit-custom-ner-task) and [reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-2-Preview-2/operations/Analyze) for more information about the JSON syntax.
-
- ```json
- {
- "displayName": "MyJobName",
- "analysisInput": {
- "documents": [
- {
- "id": "doc1",
- "text": "This is a document."
- }
- ]
- },
- "tasks": {
- "customEntityRecognitionTasks": [
- {
- "parameters": {
- "project-name": "MyProject",
- "deployment-name": "MyDeploymentName"
- "stringIndexType": "TextElements_v8"
- }
- }
- ]
- }
- }
- ```
-
-4. You will receive a 202 response indicating success. In the response headers, extract `operation-location`.
-`operation-location` is formatted like this:
-
- `{YOUR-ENDPOINT}/text/analytics/v3.2-preview.2/analyze/jobs/<jobId>`
-
- You will use this endpoint in the next step to get the custom recognition task results.
-
-5. Use the URL from the previous step to create a **GET** request to query the status/results of the custom recognition task. Add your key to the `Ocp-Apim-Subscription-Key` header for the request.
-+
+### Get the task results
+ # [Using the client libraries (Azure SDK)](#tab/client)
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/view-model-evaluation.md
See the [application development lifecycle](../overview.md#application-developme
## View the model's evaluation details
-1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
-
-2. Select **View model details** from the menu on the left side of the screen.
-
-3. In this page you can only view the successfully trained models. You can click on the model name for more details.
-
-4. You can find the **model-level** evaluation metrics under **Overview**, and the **entity-level** evaluation metrics under **Entity performance metrics**. The confusion matrix for the model is located under **Test set confusion matrix**
-
- > [!NOTE]
- > If you don't find all the entities displayed here, it's because they were not in any of the files within the test set.
-
- :::image type="content" source="../media/model-details.png" alt-text="A screenshot of the model performance metrics in Language Studio" lightbox="../media/model-details.png":::
## Next steps
communication-services Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/privacy.md
Call recordings are stored temporarily in the same geography that was selected f
## Azure Monitor and Log Analytics
-Azure Communication Services will feed into Azure Monitor logging data for understanding operational health and utilization of the service. Some of these logs include Communication Service identities and phone numbers as field data. To delete any potentially personal data [use these procedures for Azure Monitor](../../azure-monitor/logs/personal-data-mgmt.md). You may also want to configure [the default retention period for Azure Monitor](../../azure-monitor/logs/manage-cost-storage.md).
+Azure Communication Services will feed into Azure Monitor logging data for understanding operational health and utilization of the service. Some of these logs include Communication Service identities and phone numbers as field data. To delete any potentially personal data [use these procedures for Azure Monitor](../../azure-monitor/logs/personal-data-mgmt.md). You may also want to configure [the default retention period for Azure Monitor](../../azure-monitor/logs/data-retention-archive.md).
## Additional resources
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
For more information on the SMS SDK and service, see the [SMS SDK overview](./sm
|Batch of participants - AddParticipant|200 | |Page size - ListMessages|200 |
+### Operation Limits
+
+| **Operation** | **Bucketed by** | **Limit per 10 seconds** | **Limit per minute** |
+|--|--|--|--|
+|Create chat thread|User|10|-|
+|Delete chat thread|User|10|-|
+|Update chat thread|Chat thread|5|-|
+|Add participants / remove participants|Chat thread|10|30|
+|Get chat thread / List chat threads|User|50|-|
+|Get chat message / List chat messages|User and chat thread|50|-|
+|Get chat message / List chat messages|Chat thread|250|-|
+|Get read receipts|User and chat thread|5|-|
+|Get read receipts|Chat thread|250|-|
+|List chat thread participants|User and chat thread|10|-|
+|List chat thread participants|Chat thread|250|-|
+|Send message / update message / delete message|Chat thread|10|30|
+|Send read receipt|User and chat thread|10|30|
+|Send typing indicator|User and chat thread|5|15|
+|Send typing indicator|Chat thread|10|30|
+
## Voice and video calling

### Call maximum limitations
confidential-ledger Authenticate Ledger Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/authenticate-ledger-nodes.md
When initializing, code samples get the node certificate by querying Identity Se
Users can validate the authenticity of Azure confidential ledger nodes to confirm they are indeed interfacing with their Ledger’s enclave. You can build trust in Azure confidential ledger nodes in a few ways, which can be stacked on one another to increase the overall level of confidence. As such, Steps 1-2 help build confidence in that Azure confidential ledger enclave as part of the initial TLS handshake and authentication within functional workflows. Beyond that, a persistent client connection is maintained between the user's client and the confidential ledger. - **Validating a confidential ledger node**: This is accomplished by querying the identity service hosted by Microsoft, which provides a network cert and thus helps verify that the Ledger node is presenting a cert endorsed/signed by the network cert for that specific instance. Similar to PKI-based HTTPS, a server’s cert is signed by a well-known Certificate Authority (CA) or intermediate CA. In the case of Azure confidential ledger, the CA cert is returned by an Identity service in the form of a network cert. This is an important confidence building measure for users of confidential ledger. If this node cert isn’t signed by the returned network cert, the client connection should fail (as implemented in the sample code).-- **Validating a confidential ledger enclave**: A confidential ledger runs in an Intel® SGX enclave that’s represented by a Quote, a data blob generated inside that enclave. It can be used by any other entity to verify that the quote has been produced from an application running with Intel® SGX protections. The quote is structured in a way that enables easy verification. It contains claims that help identify various properties of the enclave and the application that it’s running. This is an important confidence building mechanism for users of the confidential ledger. This can be accomplished by calling a functional workflow API to get an enclave quote. The client connection should fail if the quote is invalid. The retrieved quote can then be validated with the open_enclaves Host_Verify tool. More details about this can be found here.
+- **Validating a confidential ledger enclave**: A confidential ledger runs in an Intel® SGX enclave that’s represented by a Quote, a data blob generated inside that enclave. It can be used by any other entity to verify that the quote has been produced from an application running with Intel® SGX protections. The quote is structured in a way that enables easy verification. It contains claims that help identify various properties of the enclave and the application that it’s running. This is an important confidence building mechanism for users of the confidential ledger. This can be accomplished by calling a functional workflow API to get an enclave quote. The client connection should fail if the quote is invalid. The retrieved quote can then be validated with the open_enclaves Host_Verify tool. More details about this can be found [here](https://github.com/openenclave/openenclave/tree/master/samples/host_verify).
## Next steps
connectors Connectors Native Reqres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-reqres.md
This article shows how to use the Request trigger and Response action so that yo
For more information about security, authorization, and encryption for inbound calls to your logic app, such as [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security), previously known as Secure Sockets Layer (SSL), [Azure Active Directory Open Authentication (Azure AD OAuth)](../active-directory/develop/index.yml), exposing your logic app with Azure API Management, or restricting the IP addresses that originate inbound calls, see [Secure access and data - Access for inbound calls to request-based triggers](../logic-apps/logic-apps-securing-a-logic-app.md#secure-inbound-requests). > [!NOTE]
-> For the **Logic App (Standard)** resource type in single-tenant Azure Logic Apps, Azure AD OAuth is currently
-> unavailable for inbound calls to request-based triggers, such as the Request trigger and HTTP Webhook trigger.
+>
+> In a Standard logic app workflow that starts with the Request trigger (but not a webhook trigger), you can
+> use the Azure Functions provision for authenticating inbound calls sent to the endpoint created by that trigger
+> by using a managed identity. This provision is also known as "**Easy Auth**". For more information, review
+> [Trigger workflows in Standard logic apps with Easy Auth](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/trigger-workflows-in-standard-logic-apps-with-easy-auth/ba-p/3207378).
+ ## Prerequisites
container-apps Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/health-probes.md
The `...` placeholders denote omitted code. Refer to [Container Apps Preview ARM
{ "name": "Custom-Header", "value": "liveness probe"
- }],
- "initialDelaySeconds": 7,
- "periodSeconds": 3
- }
+ }]
+ },
+ "initialDelaySeconds": 7,
+ "periodSeconds": 3
}, { "type": "readiness",
The `...` placeholders denote omitted code. Refer to [Container Apps Preview ARM
{ "name": "Custom-Header", "value": "startup probe"
- }],
- "initialDelaySeconds": 3,
- "periodSeconds": 3
- }
+ }]
+ },
+ "initialDelaySeconds": 3,
+ "periodSeconds": 3
}] }] ...
containers:
httpHeaders: - name: Custom-Header value: "liveness probe"
- initialDelaySeconds: 7
- periodSeconds: 3
+ initialDelaySeconds: 7
+ periodSeconds: 3
- type: readiness tcpSocket: port: 8081
containers:
httpHeaders: - name: Custom-Header value: "startup probe"
- initialDelaySeconds: 3
- periodSeconds: 3
+ initialDelaySeconds: 3
+ periodSeconds: 3
... ```
cosmos-db Choose Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/choose-api.md
Last updated 12/08/2021
+adobe-target: true
# Choose an API in Azure Cosmos DB
cosmos-db Supply Chain Traceability Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/supply-chain-traceability-solution.md
Title: Infosys supply chain traceability solution using Azure Cosmos DB Gremllin API
-description: The supply chain traceability graph solution implemented by Infosys uses the Azure Cosmos DB Gremlin API and other Azure services. It provides global supply chain track and trace capability for finished goods.
+ Title: Infosys supply chain traceability solution using Azure Cosmos DB Gremlin API
+description: The Infosys solution for traceability in global supply chains uses the Azure Cosmos DB Gremlin API and other Azure services. It provides track-and-trace capability in graph form for finished goods.
-# Supply chain traceability solution using Azure Cosmos DB Gremlin API
+# Solution for supply chain traceability using the Azure Cosmos DB Gremlin API
[!INCLUDE[appliesto-gremlin-api](../includes/appliesto-gremlin-api.md)]
-This article provides an overview of [traceability graph solutions implemented by Infosys](https://azuremarketplace.microsoft.com/marketplace/apps/infosysltd.infosys-traceability-knowledge-graph?tab=Overview). These solutions use Azure Cosmos DB Gremlin API and other Azure capabilities to provide global supply chain track and trace capability for finished goods.
+This article provides an overview of the [traceability graph solution implemented by Infosys](https://azuremarketplace.microsoft.com/marketplace/apps/infosysltd.infosys-traceability-knowledge-graph?tab=Overview). This solution uses the Azure Cosmos DB Gremlin API and other Azure capabilities to provide a track-and-trace capability for finished goods in global supply chains.
-After reading this article, you will learn:
+In this article, you'll learn:
-* What is traceability in the context of a supply chain?
-* Architecture of a global traceability solution delivered using Azure Capabilities.
-* How the Azure Cosmos DB graph database helps intricate relationships between raw material and finished good in a global supply chain?
-* How does the Azure integration platform services such as API Management, Event Hub help you to integrate diverse supply chain application ecosystems?
-* How can you get help from Infosys to use this solution for your traceability need?
+* What traceability is in the context of a supply chain.
+* The architecture of a global traceability solution delivered through Azure capabilities.
+* How the Azure Cosmos DB graph database helps you track intricate relationships between raw materials and finished goods in a global supply chain.
+* How Azure integration platform services such as Azure API Management and Event Hubs help you integrate diverse application ecosystems for supply chains.
+* How you can get help from Infosys to use this solution for your traceability needs.
## Overview
-In the food supply chain, product traceability is the ability to 'track and trace' them across the supply chain throughout the productΓÇÖs lifecycle. The supply chain includes supply, manufacturing, and distribution. Traceability is vital for food safety, brand, and regulatory exposure. In the past, some organizations failed to track and trace products effectively in their supply chain, resulting in expensive recalls, fines, and consumer health issues. The traceability solutions had to address the needs of data harmonization, data ingestion at different velocity and veracity, and, more importantly, follow the inventory cycle, objectives that weren't possible with traditional platforms.
+In the food supply chain, traceability is the ability to *track and trace* a product across the supply chain throughout the product's lifecycle. The supply chain includes supply, manufacturing, and distribution. Traceability is vital for food safety, brand, and regulatory exposure.
-Infosys's traceability solution, developed with Azure capabilities such as application services, integration services and database services, provides vital capabilities to:
+In the past, some organizations failed to track and trace products effectively in their supply chains. Results included expensive recalls, fines, and consumer health issues.
-* Connect to factories, warehouses/distribution centers.
-* Ingest/process parallel stock movement events.
-* A knowledge graph, which shows connections between raw material, batch, finish goods (FG) pallets, multi-level parent/child relationship of pallets, goods movement.
-* User portal with a search capability range of wildcard search to specific keyword search.
-* Identify impacts of a quality incident such as impacted raw material batch, pallets affected, location of the pallets.
-* Ability to have the history of events captured across multiple markets, including product recall information.
+Traceability solutions had to address the needs of data harmonization and data ingestion at various velocities and veracities. They also had to follow the inventory cycle. These objectives weren't possible with traditional platforms.
## Solution architecture
-Supply chain traceability commonly shares patterns in ingesting pallet movements, handing quality incidents, and tracing/analyzing store data. First, these systems need to ingest bursts of data from factory and warehouse management systems that cross geographies. Next, these systems process and analyze streaming data to derive complex relationships between raw material, production batches, finished good pallets and complex parent/child relationships (co-pack/repack). Then, the system must store information about the intricate relationships between raw material, finished goods, and pallets, all necessary for traceability. A user portal with search capability allows the users to track and trace products in the supply chain network. These services enable the end-to-end traceability solution that supports cloud-native, API-first, and data-driven capabilities.
+Supply chain traceability commonly shares patterns in ingesting pallet movements, handing quality incidents, and tracing/analyzing store data. Infosys developed an end-to-end traceability solution that uses Azure application services, integration services, and database services. The solution provides these capabilities:
-Microsoft Azure offers rich services that can be applied for traceability use cases, including Azure Cosmos DB, Azure Event Hubs, Azure API Management, Azure App Service, Azure SignalR, Azure Synapse Analytics, and Power BI.
+* Receive streaming data from factories, warehouses, and distribution centers across geographies.
+* Ingest and process parallel stock-movement events.
+* View a knowledge graph that analyzes relationships between raw materials, production batches, pallets of finished goods, multilevel parent/child relationships of pallets (copack/repack), and movement of goods.
+* Access to a user portal with a search capability that includes wildcards and specific keywords.
+* Identify impacts of a quality incident, such as affected raw materials, batches, pallets, and locations of pallets.
+* Capture the history of events across multiple markets, including product recall information.
-InfosysΓÇÖs traceability solution provides a pre-baked solution that you can use to improve track and trace capability. The following image explains the architecture used for this traceability solution:
+The Infosys traceability solution supports cloud-native, API-first, and data-driven capabilities. The following diagram illustrates the architecture of this solution:
-Different Azure services used in this architecture help with the following tasks:
+The architecture uses the following Azure services to help with specialized tasks:
-* Azure Cosmos DB allows you to scale performance up or down elastically. Gremlin API allows you to create and query complex relationships between raw material, finished goods and warehouses.
-* Azure API Management provides APIs for stock movement events to the 3PLs (thirdpParty Logistic Providers) and Warehouse Management Systems (WMS).
-* Azure Event Hub provides the ability to gather large numbers of concurrent events from WMS and 3PLs for further processing.
-* Azure Function apps processes events and ingest data to Azure Cosmos DB using Gremlin API.
-* Azure Search service allows users to do complex find, filter pallet information.
-* Azure Databricks reads change feed and creates models in Synapse Analytics for self-service reporting for users in Power BI.
-* Azure Web App and App Service plan allow you to deploy the user portal.
-* Azure Storage account stores archived data for long-term regulatory needs.
+* Azure Cosmos DB enables you to scale performance up or down elastically. By using the Gremlin API, you can create and query complex relationships between raw materials, finished goods, and warehouses.
+* Azure API Management provides APIs for stock movement events to third-party logistics (3PL) providers and warehouse management systems (WMSs).
+* Azure Event Hubs provides the ability to gather large numbers of concurrent events from 3PL providers and WMSs for further processing.
+* Azure Functions (through function apps) processes events and ingests data for Azure Cosmos DB by using the Gremlin API.
+* Azure Search enables complex searches and the filtering of pallet information.
+* Azure Databricks reads the change feed and creates models in Azure Synapse Analytics for self-service reporting for users in Power BI.
+* Azure App Service and its Web Apps feature enable the deployment of a user portal.
+* Azure Storage stores archived data for long-term regulatory needs.
-## Graph DB and its data design
+## Graph database and its data design
-The production and distribution of goods require maintaining a complex and dynamic set of relationships. An adaptive data model of our traceability graph allows storing such relationships starting from the receipt of raw material, manufacturing the finished goods in a factory, transferring to different warehouses during supply chain, and finally transferring to the customer warehouse. A high-level visualization of the process looks like the following image:
+The production and distribution of goods require maintaining a complex and dynamic set of relationships. An adaptive data model in the form of a traceability graph allows storing these relationships through all the steps in the supply chain. Here's a high-level visualization of the process:
-The above diagram shows a high level and simplified view of a complex supply chain process. However, getting the vital stock movement information from the factories and warehouses in real time makes it possible to create an elaborate graph that connects all these disparate pieces of information.
+The preceding diagram is a simplified view of a complex process. However, getting stock-movement information from the factories and warehouses in real time makes it possible to create an elaborate graph that connects all these disparate pieces of information:
-1. The traceability process starts when the supplier sends raw materials to the factories, and the initial nodes (vertices) of the graph and relationships (edges) gets created.
+1. The traceability process starts when the supplier sends raw materials to the factories. The solution creates the initial nodes (vertices) of the graph and relationships (edges).
-1. The finished goods (Items) are produced from raw materials and packed into pallets.
+1. The finished goods are produced from raw materials and packed into pallets.
-1. The pallets are then moved to factory warehouses or the market warehouses as per customer demands/orders.
+1. The pallets are moved to factory warehouses or market warehouses according to customer orders. The warehouses might be owned by the company or by 3PL providers.
-1. The warehouse could be of companyΓÇÖs owned or 3PL (third-party Logistic Providers). The pallets are then shipped to various other warehouses as per customer orders. As per the customer demands, child pallets or child-of-child pallets are created to accommodate the ordered quantity. Sometimes, a whole new item is made by mixing multiple items. For example, in a copack scenario that produces a variety pack, sometimes same item gets repacked to smaller or larger quantities to a different pallet as part of a customer order.
+1. The pallets are shipped to various other warehouses according to customer orders. Depending on customers' needs, child pallets or child-of-child pallets are created to accommodate the ordered quantity.
- :::image type="content" source="./media/supply-chain-traceability-solution/pallet-relationship.png" alt-text="Pallet relationship in supply chain traceability solution" lightbox="./media/supply-chain-traceability-solution/pallet-relationship.png" border="true":::
+ Sometimes, a whole new item is made by mixing multiple items. For example, in a copack scenario that produces a variety pack, sometimes the same item is repacked to smaller or larger quantities in a different pallet as part of a customer order.
-1. Pallets then travel through the supply chain network and eventually reach the customer warehouse. During that process, the pallets can be further broken down or combine with other pallets to produce new pallets to fulfill customer orders.
+ :::image type="content" source="./media/supply-chain-traceability-solution/pallet-relationship.png" alt-text="Pallet relationship in the solution for supply chain traceability." lightbox="./media/supply-chain-traceability-solution/pallet-relationship.png" border="true":::
-1. Eventually, the system creates a complex graph that holds vital relationship information for quality incident management, which we will discuss shortly.
+1. Pallets travel through the supply chain network and eventually reach the customer warehouse. During that process, the pallets can be further broken down or combined with other pallets to produce new pallets to fulfill customer orders.
- :::image type="content" source="./media/supply-chain-traceability-solution/supply-chain-object-relationship.png" alt-text="Supply chain object relationship complete architecture" lightbox="./media/supply-chain-traceability-solution/supply-chain-object-relationship.png" border="true":::
+1. Eventually, the system creates a complex graph that holds relationship information for quality incident management.
-1. These intricate relationships are vital in a quality incident where the system can track and trace pallets across the supply chain. Graph and its traversals provide the required information for this. For example, if there is an issue with one raw material, the graph can show the impacted pallets, current location.
+ :::image type="content" source="./media/supply-chain-traceability-solution/supply-chain-object-relationship.png" alt-text="Diagram that shows the complete architecture for the supply chain object relationship." lightbox="./media/supply-chain-traceability-solution/supply-chain-object-relationship.png" border="true":::
+
+ These intricate relationships are vital in a quality incident where the system can track and trace pallets across the supply chain. The graph and its traversals provide the required information for this. For example, if there's an issue with one raw material, the graph can show the affected pallets and the current location.
## Next steps
-* [Infosys traceability graph solution](https://azuremarketplace.microsoft.com/marketplace/apps/infosysltd.infosys-traceability-knowledge-graph?tab=Overview)
-* [Infosys Integrate+ for Azure](https://azuremarketplace.microsoft.com/marketplace/apps/infosysltd.infosys-integrate-for-azure)
+* Learn about [Infosys Integrate+ for Azure](https://azuremarketplace.microsoft.com/marketplace/apps/infosysltd.infosys-integrate-for-azure).
* To visualize graph data, see the [Gremlin API visualization solutions](graph-visualization-partners.md). * To model your graph data, see the [Gremlin API modeling solutions](graph-modeling-tools.md).
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-setup-rbac.md
+
+ Title: Configure role-based access control for your Azure Cosmos DB API for MongoDB database (preview)
+description: Learn how to configure native role-based access control in the API for MongoDB
+++ Last updated : 04/07/2022+++
+# Configure role-based access control for your Azure Cosmos DB API for MongoDB (preview)
+
+This article is about role-based access control for data plane operations in Azure Cosmos DB API for MongoDB, currently in public preview.
+
+If you're using management plane operations, see the [role-based access control](../role-based-access-control.md) article, which applies to management plane operations.
+
+The API for MongoDB exposes a built-in role-based access control (RBAC) system that lets you authorize your data requests with a fine-grained, role-based permission model. Users and roles reside within a database and are managed using the Azure CLI, Azure PowerShell, or ARM for this preview feature.
+
+## Concepts
+
+### Resource
+A resource is a collection or database to which we are applying access control rules.
+
+### Privileges
+Privileges are actions that can be performed on a specific resource. For example, "read access to collection xyz". Privileges are assigned to a specific role.
+
+### Role
+A role has one or more privileges. Roles are assigned to users (zero or more) to enable them to perform the actions defined in those privileges. Roles are stored within a single database.
+
+### Diagnostic log auditing
+An additional column called `userId` has been added to the `MongoRequests` table in the Azure portal Diagnostics feature. This column identifies which user performed which data plane operation. The value in this column is empty when RBAC is not enabled.
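
If you send these diagnostic logs to a Log Analytics workspace, you can also query the audit column programmatically. Below is a minimal sketch using the `azure-identity` and `azure-monitor-query` Python packages; the workspace ID is a placeholder, and the table and column names follow the description above:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Find requests where an RBAC user performed the data plane operation.
query = 'MongoRequests | where userId != "" | project TimeGenerated, userId'
response = client.query_workspace(
    "<log-analytics-workspace-id>", query, timespan=timedelta(days=1)
)

for table in response.tables:
    for row in table.rows:
        print(row)
```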
+
+## Available Privileges
+#### Query and Write
+* find
+* insert
+* remove
+* update
+
+#### Change Streams
+* changeStream
+
+#### Database Management
+* createCollection
+* createIndex
+* dropCollection
+* killCursors
+* killAnyCursor
+
+#### Server Administration
+* dropDatabase
+* dropIndex
+* reIndex
+
+#### Diagnostics
+* collStats
+* dbStats
+* listDatabases
+* listCollections
+* listIndexes
+
+## Built-in Roles
+These roles already exist on every database and do not need to be created.
+
+### read
+Has the following privileges: changeStream, collStats, find, killCursors, listIndexes, listCollections
+
+### readwrite
+Has the following privileges: collStats, createCollection, dropCollection, createIndex, dropIndex, find, insert, killCursors, listIndexes, listCollections, remove, update
+
+### dbAdmin
+Has the following privileges: collStats, createCollection, createIndex, dbStats, dropCollection, dropDatabase, dropIndex, listCollections, listIndexes, reIndex
+
+### dbOwner
+Has the following privileges: collStats, createCollection, createIndex, dbStats, dropCollection, dropDatabase, dropIndex, listCollections, listIndexes, reIndex, find, insert, killCursors, listIndexes, listCollections, remove, update
+
+## Azure CLI Setup
+We recommend using a command prompt (cmd) when using Windows.
+
+1. Make sure you have the latest CLI version (not the extension) installed locally. Try the `az upgrade` command.
+2. Check whether you already have the dev extension version installed: `az extension show -n cosmosdb-preview`. If it shows a local version, remove it by using the following command: `az extension remove -n cosmosdb-preview`. It might ask you to remove it from a Python virtual environment. If that's the case, launch your local CLI extension Python environment and run `azdev extension remove cosmosdb-preview` (no `-n` here).
+3. List the available extensions and make sure the list shows the preview version and that the corresponding "Compatible" flag is true.
+4. Install the latest preview version: `az extension add -n cosmosdb-preview`.
+5. Check whether the preview version is installed by using `az extension list`.
+6. Connect to your subscription.
+```powershell
+az cloud set -n AzureCloud
+az login
+az account set --subscription <your subscription ID>
+```
+7. Enable the RBAC capability on your existing API for MongoDB database account.
+```powershell
+az cosmosdb update -n <account_name> -g <azure_resource_group> --capabilities EnableMongoRoleBasedAccessControl
+```
+or create a new database account with the RBAC capability set to true. Your subscription must be allow-listed in order to create an account with the EnableMongoRoleBasedAccessControl capability.
+```powershell
+az cosmosdb create -n <account_name> -g <azure_resource_group> --kind MongoDB --capabilities EnableMongoRoleBasedAccessControl
+```
+8. Create a database for users to connect to in the Azure portal.
+9. Create an RBAC user with built-in read role.
+```powershell
+az cosmosdb mongodb user definition create --account-name <YOUR_DB_ACCOUNT> --resource-group <YOUR_RG> --body {\"Id\":\"testdb.read\",\"UserName\":\"<YOUR_USERNAME>\",\"Password\":\"<YOUR_PASSWORD>\",\"DatabaseName\":\"<YOUR_DB_NAME>\",\"CustomData\":\"Some_Random_Info\",\"Mechanisms\":\"SCRAM-SHA-256\",\"Roles\":[{\"Role\":\"read\",\"Db\":\"<YOUR_DB_NAME>\"}]}
+```
++
+## Authenticate using pymongo
+Sending the appName parameter is required to authenticate as a user in the preview. Here is an example of how to do so:
+```python
+from pymongo import MongoClient
+client = MongoClient("mongodb://<YOUR_HOSTNAME>:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000", username="<YOUR_USER>", password="<YOUR_PASSWORD>", authSource='<YOUR_DATABASE>', authMechanism='SCRAM-SHA-256', appName="<YOUR appName FROM CONNECTION STRING IN AZURE PORTAL>")
+```
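
As a follow-up, once the client above is authenticated as a user holding only the built-in `read` role, queries succeed while writes should be rejected by RBAC. This is a hypothetical sketch continuing from the previous snippet; the database and collection names are placeholders:

```python
from pymongo.errors import OperationFailure

# Continue with the authenticated client from the previous snippet.
db = client["<YOUR_DATABASE>"]

# The built-in "read" role includes the "find" privilege, so queries work.
print(db["<YOUR_COLLECTION>"].find_one())

# "insert" isn't among the read role's privileges, so a write should fail.
try:
    db["<YOUR_COLLECTION>"].insert_one({"hello": "world"})
except OperationFailure as error:
    print("Write denied:", error)
```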
+
+## Azure CLI RBAC Commands
+The RBAC management commands work only with a preview version of the Azure CLI installed. See the Azure CLI setup steps above to get started.
+
+#### Create Role Definition
+```powershell
+az cosmosdb mongodb role definition create --account-name <account-name> --resource-group <resource-group-name> --body {\"Id\":\"test.My_Read_Only_Role101\",\"RoleName\":\"My_Read_Only_Role101\",\"Type\":\"CustomRole\",\"DatabaseName\":\"test\",\"Privileges\":[{\"Resource\":{\"Db\":\"test\",\"Collection\":\"test\"},\"Actions\":[\"insert\",\"find\"]}],\"Roles\":[]}
+```
+
+#### Create Role by passing JSON file body
+```powershell
+az cosmosdb mongodb role definition create --account-name <account-name> --resource-group <resource-group-name> --body role.json
+```
+
+#### Update Role Definition
+```powershell
+az cosmosdb mongodb role definition update --account-name <account-name> --resource-group <resource-group-name> --body {\"Id\":\"test.My_Read_Only_Role101\",\"RoleName\":\"My_Read_Only_Role101\",\"Type\":\"CustomRole\",\"DatabaseName\":\"test\",\"Privileges\":[{\"Resource\":{\"Db\":\"test\",\"Collection\":\"test\"},\"Actions\":[\"insert\",\"find\"]}],\"Roles\":[]}
+```
+
+#### Update role by passing JSON file body
+```powershell
+az cosmosdb mongodb role definition update --account-name <account-name> --resource-group <resource-group-name> --body role.json
+```
+
+#### List roles
+```powershell
+az cosmosdb mongodb role definition list --account-name <account-name> --resource-group <resource-group-name>
+```
+
+#### Check if role exists
+```powershell
+az cosmosdb mongodb role definition exists --account-name <account-name> --resource-group <resource-group-name> --id test.My_Read_Only_Role
+```
+
+#### Delete role
+```powershell
+az cosmosdb mongodb role definition delete --account-name <account-name> --resource-group <resource-group-name> --id test.My_Read_Only_Role
+```
+
+#### Create user definition
+```powershell
+az cosmosdb mongodb user definition create --account-name <account-name> --resource-group <resource-group-name> --body {\"Id\":\"test.myName\",\"UserName\":\"myName\",\"Password\":\"pass\",\"DatabaseName\":\"test\",\"CustomData\":\"Some_Random_Info\",\"Mechanisms\":\"SCRAM-SHA-256\",\"Roles\":[{\"Role\":\"My_Read_Only_Role101\",\"Db\":\"test\"}]}
+```
+
+#### Create user by passing JSON file body
+```powershell
+az cosmosdb mongodb user definition create --account-name <account-name> --resource-group <resource-group-name> --body user.json
+```
+
+#### Update user definition
+To update the user's password, send the new password in the password field.
+
+```powershell
+az cosmosdb mongodb user definition update --account-name <account-name> --resource-group <resource-group-name> --body {\"Id\":\"test.myName\",\"UserName\":\"myName\",\"Password\":\"pass\",\"DatabaseName\":\"test\",\"CustomData\":\"Some_Random_Info\",\"Mechanisms\":\"SCRAM-SHA-256\",\"Roles\":[{\"Role\":\"My_Read_Only_Role101\",\"Db\":\"test\"}]}
+```
+
+#### Update user by passing JSON file body
+```powershell
+az cosmosdb mongodb user definition update --account-name <account-name> --resource-group <resource-group-name> --body user.json
+```
+
+#### List users
+```powershell
+az cosmosdb mongodb user definition list --account-name <account-name> --resource-group <resource-group-name>
+```
+
+#### Check if user exists
+```powershell
+az cosmosdb mongodb user definition exists --account-name <account-name> --resource-group <resource-group-name> --id test.myName
+```
+
+#### Delete user
+```powershell
+az cosmosdb mongodb user definition delete --account-name <account-name> --resource-group <resource-group-name> --id test.myName
+```
+
+## <a id="disable-local-auth"></a> Enforcing RBAC as the only authentication method
+
+In situations where you want to force clients to connect to Azure Cosmos DB through RBAC exclusively, you have the option to disable the account's primary/secondary keys. When doing so, any incoming request using either a primary/secondary key or a resource token will be actively rejected.
+
+### Using Azure Resource Manager templates
+
+When creating or updating your Azure Cosmos DB account using Azure Resource Manager templates, set the `disableLocalAuth` property to `true`:
+
+```json
+"resources": [
+ {
+ "type": " Microsoft.DocumentDB/databaseAccounts",
+ "properties": {
+ "disableLocalAuth": true,
+ },
+ },
+ ]
+```
+
+## Limitations
+
+- The number of users and roles you can create must be equal to or less than 10,000.
+- The commands `listCollections`, `listDatabases`, and `killCursors` are excluded from RBAC in the preview.
+- Backup/Restore is not supported in the preview.
+- [Azure Synapse Link for Azure Cosmos DB](../synapse-link.md) is not supported in the preview.
+- Users and Roles across databases are not supported in the preview.
+- Users must connect with a tool that supports the appName parameter in the preview. Mongo shell and many GUI tools aren't supported in the preview. MongoDB drivers are supported.
+- A user's password can only be set/reset through the Azure CLI or Azure PowerShell in the preview.
+- Configuring Users and Roles is only supported through Azure CLI / PowerShell.
+
+## Frequently asked questions (FAQs)
+
+### Which Azure Cosmos DB APIs are supported by RBAC?
+
+The API for MongoDB (preview) and the SQL API.
+
+### Is it possible to manage role definitions and role assignments from the Azure portal?
+
+Azure portal support for role management is not available yet.
+
+### Is it possible to disable the usage of the account primary/secondary keys when using RBAC?
+
+Yes, see [Enforcing RBAC as the only authentication method](#disable-local-auth).
+
+### How do I change a user's password?
+
+Update the user definition with the new password.
+
+## Next steps
+
+- Get an overview of [secure access to data in Cosmos DB](../secure-access-to-data.md).
+- Learn more about [RBAC for Azure Cosmos DB management](../role-based-access-control.md).
cosmos-db Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/role-based-access-control.md
Title: Azure role-based access control in Azure Cosmos DB
description: Learn how Azure Cosmos DB provides database protection with Active directory integration (Azure RBAC). Previously updated : 06/17/2021 Last updated : 04/06/2022
[!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)] > [!NOTE]
-> Azure RBAC support in Azure Cosmos DB applies to management plane operations only. This article is about role-based access control for management plane operations in Azure Cosmos DB. If you are using data plane operations, data is secured using primary keys, resource tokens, or the Azure Cosmos DB RBAC. To learn more about role-based access control applied to data plane operations, see [Secure access to data](secure-access-to-data.md) and [Azure Cosmos DB RBAC](how-to-setup-rbac.md) articles.
+> This article is about role-based access control for management plane operations in Azure Cosmos DB. If you are using data plane operations, data is secured using primary keys, resource tokens, or the Azure Cosmos DB RBAC.
+
+To learn more about role-based access control applied to data plane operations in the SQL API, see the [Secure access to data](secure-access-to-data.md) and [Azure Cosmos DB RBAC](how-to-setup-rbac.md) articles. For the Cosmos DB API for MongoDB, see [Data Plane RBAC in the API for MongoDB](mongodb/how-to-setup-rbac.md).
Azure Cosmos DB provides built-in Azure role-based access control (Azure RBAC) for common management scenarios in Azure Cosmos DB. An individual who has a profile in Azure Active Directory can assign these Azure roles to users, groups, service principals, or managed identities to grant or deny access to resources and operations on Azure Cosmos DB resources. Role assignments are scoped to control-plane access only, which includes access to Azure Cosmos accounts, databases, containers, and offers (throughput).
+
## Built-in roles

The following are the built-in roles supported by Azure Cosmos DB:
Update-AzCosmosDBAccount -ResourceGroupName [ResourceGroupName] -Name [CosmosDBA
- [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) - [Azure custom roles](../role-based-access-control/custom-roles.md) - [Azure Cosmos DB resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftdocumentdb)
+- [Configure role-based access control for your Azure Cosmos DB API for MongoDB](mongodb/how-to-setup-rbac.md)
cosmos-db Secure Access To Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/secure-access-to-data.md
Previously updated : 08/30/2021 Last updated : 04/06/2022 - # Secure access to data in Azure Cosmos DB [!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]
Azure Cosmos DB RBAC is the ideal access control method in situations where:
See [Configure role-based access control for your Azure Cosmos DB account](how-to-setup-rbac.md) to learn more about Azure Cosmos DB RBAC.
+For information and sample code to configure RBAC for the Azure Cosmos DB API for MongoDB, see [Configure role-based access control for your Azure Cosmos DB API for MongoDB](mongodb/how-to-setup-rbac.md).
+ ## <a id="resource-tokens"></a> Resource tokens Resource tokens provide access to the application resources within a database. Resource tokens:
As a database service, Azure Cosmos DB enables you to search, select, modify and
- To learn more about Cosmos database security, see [Cosmos DB Database security](database-security.md). - To learn how to construct Azure Cosmos DB authorization tokens, see [Access Control on Azure Cosmos DB Resources](/rest/api/cosmos-db/access-control-on-cosmosdb-resources).-- User management samples with users and permissions, [.NET SDK v3 user management samples](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/UserManagement/UserManagementProgram.cs)
+- For user management samples with users and permissions, see [.NET SDK v3 user management samples](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/UserManagement/UserManagementProgram.cs)
+- For information and sample code to configure RBAC for the Azure Cosmos DB API for MongoDB, see [Configure role-based access control for your Azure Cosmos DB API for MongoDB](mongodb/how-to-setup-rbac.md)
cosmos-db Change Feed Pull Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/change-feed-pull-model.md
ms.devlang: csharp Previously updated : 08/02/2021 Last updated : 04/07/2022
FeedIterator<User> InteratorWithPOCOS = container.GetChangeFeedIterator<User>(Ch
Here's an example for obtaining a `FeedIterator` that returns a `Stream`: ```csharp
-FeedIterator iteratorWithStreams = container.GetChangeFeedStreamIterator<User>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
+FeedIterator iteratorWithStreams = container.GetChangeFeedStreamIterator(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
``` If you don't supply a `FeedRange` to a `FeedIterator`, you can process an entire container's change feed at your own pace. Here's an example, which starts reading all changes starting at the current time: ```csharp
-FeedIterator iteratorForTheEntireContainer = container.GetChangeFeedStreamIterator<User>(ChangeFeedStartFrom.Now(), ChangeFeedMode.Incremental);
+FeedIterator<User> iteratorForTheEntireContainer = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Now(), ChangeFeedMode.Incremental);
while (iteratorForTheEntireContainer.HasMoreResults) {
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-dotnet-v3.md
Previously updated : 03/07/2022 Last updated : 04/07/2022 ms.devlang: csharp
ms.devlang: csharp
> To learn about the Azure Cosmos DB .NET SDK v3, see the [Release notes](sql-api-sdk-dotnet-standard.md), the [.NET GitHub repository](https://github.com/Azure/azure-cosmos-dotnet-v3), .NET SDK v3 [Performance Tips](performance-tips-dotnet-sdk-v3-sql.md), and the [Troubleshooting guide](troubleshoot-dot-net-sdk.md). >
-This article highlights some of the considerations of upgrading your existing .NET application to the newer Azure Cosmos DB .NET SDK v3 for Core (SQL) API. Azure Cosmos DB .NET SDK v3 corresponds to the Microsoft.Azure.Cosmos namespace. You can use the information provided in this doc if you are migrating your application from any of the following Azure Cosmos DB .NET SDKs:
+This article highlights some of the considerations of upgrading your existing .NET application to the newer Azure Cosmos DB .NET SDK v3 for Core (SQL) API. Azure Cosmos DB .NET SDK v3 corresponds to the Microsoft.Azure.Cosmos namespace. You can use the information provided in this doc if you're migrating your application from any of the following Azure Cosmos DB .NET SDKs:
* Azure Cosmos DB .NET Framework SDK v2 for SQL API * Azure Cosmos DB .NET Core SDK v2 for SQL API
Most of the networking, retry logic, and lower levels of the SDK remain largely
## Why migrate to the .NET v3 SDK
-In addition to the numerous usability and performance improvements, new feature investments made in the latest SDK will not be back ported to older versions.
+In addition to the numerous usability and performance improvements, new feature investments made in the latest SDK won't be back ported to older versions.
The v2 SDK is currently in maintenance mode. For the best development experience, we recommend always starting with the latest supported version of SDK. ## Major name changes from v2 SDK to v3 SDK
The following classes have been replaced on the 3.0 SDK:
The Microsoft.Azure.Documents.UriFactory class has been replaced by the fluent design. The fluent design builds URLs internally and allows a single `Container` object to be passed around instead of a `DocumentClient`, `DatabaseName`, and `DocumentCollection`.
-Because the .NET v3 SDK allows users to configure a custom serialization engine, there is no direct replacement for the `Document` type. When using Newtonsoft.Json (default serialization engine), `JObject` can be used to achieve the same functionality. When using a different serialization engine, you can use its base json document type (for example, `JsonDocument` for System.Text.Json). The recommendation is to use a C# type that reflects the schema of your items instead of relying on generic types.
+Because the .NET v3 SDK allows users to configure a custom serialization engine, there's no direct replacement for the `Document` type. When using Newtonsoft.Json (default serialization engine), `JObject` can be used to achieve the same functionality. When using a different serialization engine, you can use its base json document type (for example, `JsonDocument` for System.Text.Json). The recommendation is to use a C# type that reflects the schema of your items instead of relying on generic types.
### Changes to item ID generation
The v3 SDK has built-in support for the Change Feed Processor APIs, allowing you
For more information, see [how to migrate from the change feed processor library to the Azure Cosmos DB .NET v3 SDK](how-to-migrate-from-change-feed-library.md)
+### Change feed queries
+
+Executing change feed queries on the v3 SDK is considered to be using the [change feed pull model](change-feed-pull-model.md). Follow this table to migrate configuration:
+
+| .NET v2 SDK | .NET v3 SDK |
+|-|-|
+|`ChangeFeedOptions.PartitionKeyRangeId`|`FeedRange` - To achieve parallelism when reading the change feed, [FeedRanges](change-feed-pull-model.md#using-feedrange-for-parallelization) can be used. It's no longer a required parameter; you can now easily [read the Change Feed for an entire container](change-feed-pull-model.md#consuming-an-entire-containers-changes).|
+|`ChangeFeedOptions.PartitionKey`|`FeedRange.FromPartitionKey` - A FeedRange representing the desired Partition Key can be used to [read the Change Feed for that Partition Key value](change-feed-pull-model.md#consuming-a-partition-keys-changes).|
+|`ChangeFeedOptions.RequestContinuation`|`ChangeFeedStartFrom.Continuation` - The change feed iterator can be stopped and resumed at any time by [saving the continuation and using it when creating a new iterator](change-feed-pull-model.md#saving-continuation-tokens).|
+|`ChangeFeedOptions.StartTime`|`ChangeFeedStartFrom.Time` |
+|`ChangeFeedOptions.StartFromBeginning` |`ChangeFeedStartFrom.Beginning` |
+|`ChangeFeedOptions.MaxItemCount`|`ChangeFeedRequestOptions.PageSizeHint` - Sets the maximum number of items to be returned in each page of results.|
+|`IDocumentQuery.HasMoreResults` |`response.StatusCode == HttpStatusCode.NotModified` - The change feed is conceptually infinite, so there could always be more results. When a response contains the `HttpStatusCode.NotModified` status code, it means there are no new changes to read at this time. You can use that to stop and [save the continuation](change-feed-pull-model.md#saving-continuation-tokens) or to temporarily sleep or wait and then call `ReadNextAsync` again to test for new changes. |
+|Split handling|It's no longer required for users to handle split exceptions when reading the change feed; splits are handled transparently without the need for user interaction.|
+ ### Using the bulk executor library directly from the V3 SDK The v3 SDK has built-in support for the bulk executor library, allowing you to use the same SDK for building your application and performing bulk operations. Previously, you were required to use a separate bulk executor library.
private static async Task DeleteItemAsync(DocumentClient client)
```
+### Change feed query
+
+# [.NET SDK v3](#tab/dotnet-v3)
+
+```csharp
+private static async Task QueryChangeFeedAsync(Container container)
+{
+ FeedIterator<SalesOrder> iterator = container.GetChangeFeedIterator<SalesOrder>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
+
+ string continuation = null;
+ while (iterator.HasMoreResults)
+ {
+ FeedResponse<SalesOrder> response = await iterator.ReadNextAsync();
+
+ if (response.StatusCode == HttpStatusCode.NotModified)
+ {
+ // No new changes
+ continuation = response.ContinuationToken;
+ break;
+ }
+ else
+ {
+ // Process the documents in response
+ }
+ }
+}
+```
+
+# [.NET SDK v2](#tab/dotnet-v2)
+
+```csharp
+private static async Task QueryChangeFeedAsync(DocumentClient client, string partitionKeyRangeId)
+{
+ ChangeFeedOptions options = new ChangeFeedOptions
+ {
+ PartitionKeyRangeId = partitionKeyRangeId,
+ StartFromBeginning = true,
+ };
+
+ using(var query = client.CreateDocumentChangeFeedQuery(
+ UriFactory.CreateDocumentCollectionUri(DatabaseName, ContainerName), options))
+ {
+ do
+ {
+ var response = await query.ExecuteNextAsync<Document>();
+ if (response.Count > 0)
+ {
+ var docs = new List<Document>();
+ docs.AddRange(response);
+ // Process the documents.
+ // Save response.ResponseContinuation if needed
+ }
+ }
+ while (query.HasMoreResults);
+ }
+}
+```
++ ## Next steps * [Build a Console app](sql-api-get-started.md) to manage Azure Cosmos DB SQL API data using the v3 SDK
cosmos-db Create Table Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/create-table-nodejs.md
In this quickstart, you create an Azure Cosmos DB Table API account, and use Dat
## Prerequisites -- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
- [Node.js 0.10.29+](https://nodejs.org/) . - [Git](https://git-scm.com/downloads).
-## Create a database account
+## Sample application
-> [!IMPORTANT]
-> You need to create a new Table API account to work with the generally available Table API SDKs. Table API accounts created during preview are not supported by the generally available SDKs.
->
+The sample application for this tutorial may be cloned or downloaded from the repository [https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-js](https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-js). Both a starter and completed app are included in the sample repository.
+```bash
+git clone https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-js
+```
-## Add a table
+The sample application uses weather data as an example to demonstrate the capabilities of the Table API. Objects representing weather observations are stored and retrieved using the Table API, including storing objects with additional properties to demonstrate the schemaless capabilities of the Table API.
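As an illustration of the shape of that data, a weather observation entity stored through the Table API might look like the following sketch. Only `partitionKey` and `rowKey` are required by the Table API; the remaining property names here are assumptions chosen for illustration, not the sample's exact schema.

```js
// Hypothetical weather observation entity; only partitionKey and rowKey are
// required. Any other properties are stored as-is and their columns are
// created automatically when the entity is written.
const observation = {
  partitionKey: "Chicago",      // weather station or city name
  rowKey: "2021-07-01 06:00",   // observation date and time
  Temperature: 21.3,
  Precipitation: 0.0,
};
```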
-## Add sample data
+## 1 - Create an Azure Cosmos DB account
+You first need to create a Cosmos DB Table API account that will contain the table(s) used in your application. This can be done using the Azure portal, Azure CLI, or Azure PowerShell.
-## Clone the sample application
+### [Azure portal](#tab/azure-portal)
-Now let's clone a Table app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create a Cosmos DB account.
-1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Create cosmos db account step 1](./includes/create-table-dotnet/create-cosmos-db-acct-1.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find Cosmos DB accounts in Azure." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-1.png"::: |
+| [!INCLUDE [Create cosmos db account step 1](./includes/create-table-dotnet/create-cosmos-db-acct-2.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-2-240px.png" alt-text="A screenshot showing the Create button location on the Cosmos DB accounts page in Azure." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-2.png"::: |
+| [!INCLUDE [Create cosmos db account step 1](./includes/create-table-dotnet/create-cosmos-db-acct-3.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-3-240px.png" alt-text="A screenshot showing the Azure Table option as the correct option to select." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-3.png"::: |
+| [!INCLUDE [Create cosmos db account step 1](./includes/create-table-dotnet/create-cosmos-db-acct-4.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-4-240px.png" alt-text="A screenshot showing how to fill out the fields on the Cosmos DB Account creation page." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-4.png"::: |
- ```bash
- md "C:\git-samples"
- ```
+### [Azure CLI](#tab/azure-cli)
-2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
+Cosmos DB accounts are created using the [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) command. You must include the `--capabilities EnableTable` option to enable table storage within your Cosmos DB account. Because all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Cosmos DB account.
- ```bash
- cd "C:\git-samples"
- ```
+Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Cosmos DB account names must also be unique across Azure.
-3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with the [Azure CLI installed](/cli/azure/install-azure-cli).
- ```bash
- git clone https://github.com/Azure-Samples/storage-table-node-getting-started.git
- ```
+It typically takes several minutes for the Cosmos DB account creation process to complete.
-> [!TIP]
-> For a more detailed walkthrough of similar code, see the [Cosmos DB Table API sample](how-to-use-nodejs.md) article.
+```azurecli
+LOCATION='eastus'
+RESOURCE_GROUP_NAME='rg-msdocs-tables-sdk-demo'
+COSMOS_ACCOUNT_NAME='cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
+COSMOS_TABLE_NAME='WeatherData'
-## Review the code
+az group create \
+ --location $LOCATION \
+ --name $RESOURCE_GROUP_NAME
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [update the connection string](#update-your-connection-string) section of this doc.
+az cosmosdb create \
+ --name $COSMOS_ACCOUNT_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --capabilities EnableTable
+```
-* The following code shows how to create a table within the Azure Storage:
+### [Azure PowerShell](#tab/azure-powershell)
- ```javascript
- storageClient.createTableIfNotExists(tableName, function (error, createResult) {
- if (error) return callback(error);
+Azure Cosmos DB accounts are created using the [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet. You must include the `-ApiKind "Table"` option to enable table storage within your Cosmos DB account. Because all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Azure Cosmos DB account.
- if (createResult.isSuccessful) {
- console.log("1. Create Table operation executed successfully for: ", tableName);
- }
- }
+Azure Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Azure Cosmos DB account names must also be unique across Azure.
+
+Azure PowerShell commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with [Azure PowerShell installed](/powershell/azure/install-az-ps).
+
+It typically takes several minutes for the Cosmos DB account creation process to complete.
+
+```azurepowershell
+$location = 'eastus'
+$resourceGroupName = 'rg-msdocs-tables-sdk-demo'
+$cosmosAccountName = 'cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
+
+# Create a resource group
+New-AzResourceGroup `
+ -Location $location `
+ -Name $resourceGroupName
+
+# Create an Azure Cosmos DB
+New-AzCosmosDBAccount `
+ -Name $cosmosAccountName `
+ -ResourceGroupName $resourceGroupName `
+ -Location $location `
+ -ApiKind "Table"
+```
+++
+## 2 - Create a table
+
+Next, you need to create a table within your Cosmos DB account for your application to use. Unlike a traditional database, you only need to specify the name of the table, not the properties (columns) in the table. As data is loaded into your table, the properties (columns) will be automatically created as needed.
+
+### [Azure portal](#tab/azure-portal)
+
+In the [Azure portal](https://portal.azure.com/), complete the following steps to create a table inside your Cosmos DB account.
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Create cosmos db table step 1](./includes/create-table-dotnet/create-cosmos-table-1.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find your Cosmos DB account." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-1.png"::: |
+| [!INCLUDE [Create cosmos db table step 2](./includes/create-table-dotnet/create-cosmos-table-2.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-2-240px.png" alt-text="A screenshot showing the location of the Add Table button." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-2.png"::: |
+| [!INCLUDE [Create cosmos db table step 3](./includes/create-table-dotnet/create-cosmos-table-3.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-3-240px.png" alt-text="A screenshot showing the New Table dialog box for a Cosmos DB table." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-3.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+Tables in Cosmos DB are created using the [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) command.
+
+```azurecli
+COSMOS_TABLE_NAME='WeatherData'
+
+az cosmosdb table create \
+ --account-name $COSMOS_ACCOUNT_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $COSMOS_TABLE_NAME \
+ --throughput 400
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+Tables in Cosmos DB are created using the [New-AzCosmosDBTable](/powershell/module/az.cosmosdb/new-azcosmosdbtable) cmdlet.
+
+```azurepowershell
+$cosmosTableName = 'WeatherData'
+
+# Create the table for the application to use
+New-AzCosmosDBTable `
+ -Name $cosmosTableName `
+ -AccountName $cosmosAccountName `
+ -ResourceGroupName $resourceGroupName
+```
+++
+## 3 - Get Cosmos DB connection string
+
+To access your table(s) in Cosmos DB, your app will need the table connection string for the Cosmos DB account. The connection string can be retrieved using the Azure portal, Azure CLI, or Azure PowerShell.
+
+### [Azure portal](#tab/azure-portal)
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Get cosmos db table connection string step 1](./includes/create-table-dotnet/get-cosmos-connection-string-1.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-cosmos-db-table-connection-string-1-240px.png" alt-text="A screenshot showing the location of the connection strings link on the Cosmos DB page." lightbox="./media/create-table-dotnet/azure-portal-cosmos-db-table-connection-string-1.png"::: |
+| [!INCLUDE [Get cosmos db table connection string step 2](./includes/create-table-dotnet/get-cosmos-connection-string-2.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-cosmos-db-table-connection-string-2-240px.png" alt-text="A screenshot showing which connection string to select and use in your application." lightbox="./media/create-table-dotnet/azure-portal-cosmos-db-table-connection-string-2.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+To get the primary table storage connection string using Azure CLI, use the [az cosmosdb keys list](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command with the option `--type connection-strings`. This command uses a [JMESPath query](https://jmespath.org/) to display only the primary table connection string.
+
+```azurecli
+# This gets the primary Table connection string
+az cosmosdb keys list \
+ --type connection-strings \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $COSMOS_ACCOUNT_NAME \
+ --query "connectionStrings[?description=='Primary Table Connection String'].connectionString" \
+ --output tsv
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+To get the primary table storage connection string using Azure PowerShell, use the [Get-AzCosmosDBAccountKey](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
+
+```azurepowershell
+# This gets the primary Table connection string
+$(Get-AzCosmosDBAccountKey `
+ -ResourceGroupName $resourceGroupName `
+ -Name $cosmosAccountName `
+ -Type "ConnectionStrings")."Primary Table Connection String"
+```
+
+The connection string for your Cosmos DB account is considered an app secret and must be protected like any other app secret or password.
+++
+## 4 - Install the Azure Data Tables SDK for JS
+
+To access the Cosmos DB Table API from a Node.js application, install the [Azure Data Tables SDK](https://www.npmjs.com/package/@azure/data-tables) package.
+
+```bash
+ npm install @azure/data-tables
+```
+
+## 5 - Configure the Table client in env.js file
+
+Copy your Cosmos DB or Storage account connection string from the Azure portal, and create a `TableClient` object using your copied connection string. Switch to the `1-starter-app` or `2-completed-app` folder. Then, add the values of the corresponding environment variables in the `configure/env.js` file.
- ```
+```js
+const env = {
+ connectionString:"A connection string to an Azure Storage or Cosmos account.",
+ tableName: "WeatherData",
+};
+```
-* The following code shows how to insert data into the table:
+The Azure SDK communicates with Azure using client objects to execute different operations against Azure. The `TableClient` class is used to communicate with the Cosmos DB Table API. An application will typically create a single `serviceClient` object per table to be used throughout the application.
- ```javascript
- var customer = createCustomerEntityDescriptor("Harp", "Walter", "Walter@contoso.com", "425-555-0101");
+```js
+const { TableClient } = require("@azure/data-tables");
+const env = require("../configure/env");
+const serviceClient = TableClient.fromConnectionString(
+ env.connectionString,
+ env.tableName
+);
+```
- storageClient.insertOrMergeEntity(tableName, customer, function (error, result, response) {
- if (error) return callback(error);
++
+## 6 - Implement Cosmos DB table operations
+
+All Cosmos DB table operations for the sample app are implemented through the `serviceClient` object defined in the `tableClient.js` file under the *service* directory.
+
+```js
+const { TableClient } = require("@azure/data-tables");
+const env = require("../configure/env");
+const serviceClient = TableClient.fromConnectionString(
+ env.connectionString,
+ env.tableName
+);
+```
+
+### Get rows from a table
+
+The `serviceClient` object contains a method named `listEntities` which allows you to select rows from the table. In this example, since no parameters are being passed to the method, all rows will be selected from the table.
+
+```js
+const allRowsEntities = serviceClient.listEntities();
+```
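Because `listEntities` returns an async iterable of entities rather than an array, the rows are typically consumed with `for await`. The following is a minimal sketch under that assumption; the `Temperature` property name is illustrative and comes from the weather data example rather than being a required field.

```js
// Sketch: consume the async iterable returned by listEntities with for-await.
const logAllRows = async function () {
  for await (const entity of serviceClient.listEntities()) {
    // Temperature is an illustrative property from the weather sample data.
    console.log(`${entity.partitionKey} ${entity.rowKey}: ${entity.Temperature}`);
  }
};
```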
+
+### Filter rows returned from a table
+
+To filter the rows returned from a table, you can pass an OData-style filter string to the `listEntities` method. For example, if you wanted to get all of the weather readings for Chicago between midnight July 1, 2021 and midnight July 2, 2021 (inclusive), you would pass in the following filter string.
+
+```odata
+PartitionKey eq 'Chicago' and RowKey ge '2021-07-01 12:00' and RowKey le '2021-07-02 12:00'
+```
+
+You can view all OData filter operators on the OData website in the [Filter System Query Option](https://www.odata.org/documentation/odata-version-2-0/uri-conventions/) section.
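As a minimal sketch of how such a filter is used, the example filter string above could be passed to `listEntities` through its `queryOptions` parameter; the property names in the filter come from the example above, and the surrounding function is hypothetical.

```js
// Sketch: pass an OData filter string to listEntities via queryOptions.
const listChicagoReadings = async function () {
  const results = serviceClient.listEntities({
    queryOptions: {
      filter: "PartitionKey eq 'Chicago' and RowKey ge '2021-07-01 12:00' and RowKey le '2021-07-02 12:00'",
    },
  });
  for await (const entity of results) {
    console.log(entity.rowKey);
  }
};
```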
- console.log(" insertOrMergeEntity succeeded.");
+When a set of filter options is passed to the `filterEntities` function in the sample, it creates a filter clause for each non-null property value and then builds a combined filter string by joining all of the clauses with an "and" operator. This combined filter string is passed to the `listEntities` method on the `serviceClient` object, and only rows matching the filter string are returned. You can use a similar approach in your code to construct suitable filter strings as required by your application.
+
+```js
+const filterEntities = async function (option) {
+ /*
+ You can query data according to existing fields
+ option provides some conditions to query,eg partitionKey, rowKeyDateTimeStart, rowKeyDateTimeEnd
+ minTemperature, maxTemperature, minPrecipitation, maxPrecipitation
+ */
+ const filterEntitiesArray = [];
+ const filters = [];
+ if (option.partitionKey) {
+ filters.push(`PartitionKey eq '${option.partitionKey}'`);
+ }
+ if (option.rowKeyDateTimeStart) {
+ filters.push(`RowKey ge '${option.rowKeyDateTimeStart}'`);
+ }
+ if (option.rowKeyDateTimeEnd) {
+ filters.push(`RowKey le '${option.rowKeyDateTimeEnd}'`);
+ }
+ if (option.minTemperature !== null) {
+ filters.push(`Temperature ge ${option.minTemperature}`);
+ }
+ if (option.maxTemperature !== null) {
+ filters.push(`Temperature le ${option.maxTemperature}`);
}
- ```
-
-* The following code shows how to query data from the table:
-
- ```javascript
- console.log("6. Retrieving entities with surname of Smith and first names > 1 and <= 75");
-
- var storageTableQuery = storage.TableQuery;
- var segmentSize = 10;
-
- // Demonstrate a partition range query whereby we are searching within a partition for a set of entities that are within a specific range.
- var tableQuery = new storageTableQuery()
- .top(segmentSize)
- .where('PartitionKey eq ?', lastName)
- .and('RowKey gt ?', "0001").and('RowKey le ?', "0075");
-
- runPageQuery(tableQuery, null, function (error, result) {
-
- if (error) return callback(error);
-
- ```
-
-* The following code shows how to delete data from the table:
-
- ```javascript
- storageClient.deleteEntity(tableName, customer, function entitiesQueried(error, result) {
- if (error) return callback(error);
-
- console.log(" deleteEntity succeeded.");
+ if (option.minPrecipitation !== null) {
+ filters.push(`Precipitation ge ${option.minPrecipitation}`);
}
- ```
-
-## Update your connection string
+ if (option.maxPrecipitation !== null) {
+ filters.push(`Precipitation le ${option.maxPrecipitation}`);
+ }
+ const res = serviceClient.listEntities({
+ queryOptions: {
+ filter: filters.join(" and "),
+ },
+ });
+ for await (const entity of res) {
+ filterEntitiesArray.push(entity);
+ }
+
+ return filterEntitiesArray;
+};
+```
+
+### Insert data using a TableEntity object
+
+The simplest way to add data to a table is by using a `TableEntity` object. In this example, data is mapped from an input model object to a `TableEntity` object. The properties on the input object representing the weather station name and observation date/time are mapped to the `PartitionKey` and `RowKey` properties respectively, which together form a unique key for the row in the table. The additional properties on the input model object are then mapped to dictionary properties on the `TableEntity` object. Finally, the `createEntity` method on the `serviceClient` object is used to insert data into the table.
-Now go back to the Azure portal to get your connection string information and copy it into the app. This enables your app to communicate with your hosted database.
+Modify the `insertEntity` function in the example application to contain the following code.
-1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Connection String**.
+```js
+const insertEntity = async function (entity) {
- :::image type="content" source="./media/create-table-nodejs/connection-string.png" alt-text="View and copy the required connection string information from the in the Connection String pane":::
+ await serviceClient.createEntity(entity);
-2. Copy the PRIMARY CONNECTION STRING using the copy button on the right side.
+};
+```
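As a sketch of the mapping described above, a small helper could translate an input model into a table entity before calling `insertEntity`. The input property names (`stationName`, `observedAt`, `temperature`, `precipitation`) are hypothetical and not taken from the sample code.

```js
// Hypothetical mapping from an input model to a table entity. The station name
// and observation date/time become partitionKey and rowKey respectively.
const toTableEntity = (model) => ({
  partitionKey: model.stationName,
  rowKey: model.observedAt,              // for example '2021-07-01 06:00'
  Temperature: model.temperature,
  Precipitation: model.precipitation,
});

const insertObservation = async function (model) {
  await insertEntity(toTableEntity(model));
};
```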
+### Upsert data using a TableEntity object
-3. Open the *app.config* file, and paste the value into the connectionString on line three.
+If you try to insert a row into a table with a partition key/row key combination that already exists in that table, you will receive an error. For this reason, it is often preferable to use the `upsertEntity` method instead of the `createEntity` method when adding rows to a table. If the given partition key/row key combination already exists in the table, the `upsertEntity` method updates the existing row. Otherwise, the row is added to the table.
- > [!IMPORTANT]
- > If your Endpoint uses documents.azure.com, that means you have a preview account, and you need to create a [new Table API account](#create-a-database-account) to work with the generally available Table API SDK.
- >
+```js
+const upsertEntity = async function (entity) {
-3. Save the *app.config* file.
+ await serviceClient.upsertEntity(entity, "Merge");
-You've now updated your app with all the info it needs to communicate with Azure Cosmos DB.
+};
+```
+### Insert or upsert data with variable properties
-## Run the app
+One of the advantages of using the Cosmos DB Table API is that if an object being loaded to a table contains any new properties, those properties are automatically added to the table and their values are stored in Cosmos DB. There is no need to run DDL statements like ALTER TABLE to add columns as in a traditional database.
-1. In the git terminal window, `cd` to the storage-table-java-getting-started folder.
+This model gives your application flexibility when dealing with data sources that may add or modify what data needs to be captured over time or when different inputs provide different data to your application. In the sample application, we can simulate a weather station that sends not just the base weather data but also some additional values. When an object with these new properties is stored in the table for the first time, the corresponding properties (columns) will be automatically added to the table.
- ```
- cd "C:\git-samples\storage-table-node-getting-started"
- ```
+To insert or upsert such an object using the Table API, map the properties of the expandable object into a `TableEntity` object and use the `createEntity` or `upsertEntity` methods on the `serviceClient` object as appropriate.
-2. Run the following command to install the [azure], [node-uuid], [nconf] and [async] modules locally as well as to save an entry for them to the *package.json* file.
+In the sample application, the `upsertEntity` function can also be used to insert or upsert data with variable properties.
- ```
- npm install azure-storage node-uuid async nconf --save
- ```
+```js
+const insertEntity = async function (entity) {
+ await serviceClient.createEntity(entity);
+};
-2. In the git terminal window, run the following commands to run the Node.js application.
+const upsertEntity = async function (entity) {
+ await serviceClient.upsertEntity(entity, "Merge");
+};
+```
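To illustrate the schemaless behavior described above, the following sketch upserts an entity that carries extra properties. `WindSpeed` and `HumidityPercent` are hypothetical property names used only to show that new columns are created automatically when such an entity is stored.

```js
// Sketch: upsert an entity with previously unseen properties; the new columns
// are added to the table automatically when the entity is saved.
const extendedObservation = {
  partitionKey: "Chicago",
  rowKey: "2021-07-01 07:00",
  Temperature: 22.1,
  WindSpeed: 12.5,          // hypothetical extra property
  HumidityPercent: 63,      // hypothetical extra property
};

const saveExtendedObservation = async function () {
  await upsertEntity(extendedObservation);
};
```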
+### Update an entity
- ```
- node ./tableSample.js
- ```
+Entities can be updated by calling the `updateEntity` method on the `serviceClient` object.
- The console window displays the table data being added to the new table database in Azure Cosmos DB.
+In the sample app, this object is passed to the `upsertEntity` method in the `serviceClient` object. It updates that entity object and uses the `upsertEntity` method to save the updates to the database.
- You can now go back to Data Explorer and see, query, modify, and work with this new data.
+```js
+const updateEntity = async function (entity) {
+ await serviceClient.updateEntity(entity, "Replace");
+};
+```
-## Review SLAs in the Azure portal
+## 7 - Run the code
+Run the sample application to interact with the Cosmos DB Table API. The first time you run the application, there will be no data because the table is empty. Use any of the buttons at the top of the application to add data to the table.
++
+Selecting the **Insert using Table Entity** button opens a dialog allowing you to insert or upsert a new row using a `TableEntity` object.
++
+Selecting the **Insert using Expandable Data** button brings up a dialog that enables you to insert an object with custom properties, demonstrating how the Cosmos DB Table API automatically adds properties (columns) to the table when needed. Use the *Add Custom Field* button to add one or more new properties and demonstrate this capability.
++
+Use the **Insert Sample Data** button to load some sample data into your Cosmos DB Table.
++
+Select the **Filter Results** item in the top menu to be taken to the Filter Results page. On this page, fill out the filter criteria to demonstrate how a filter clause can be built and passed to the Cosmos DB Table API.
+ ## Clean up resources
+When you are finished with the sample application, you should remove all Azure resources related to this article from your Azure account. You can do this by deleting the resource group.
+
+### [Azure portal](#tab/azure-portal)
+
+A resource group can be deleted using the [Azure portal](https://portal.azure.com/) by doing the following.
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Delete resource group step 1](./includes/create-table-dotnet/remove-resource-group-1.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-remove-resource-group-1-240px.png" alt-text="A screenshot showing how to search for a resource group." lightbox="./media/create-table-dotnet/azure-portal-remove-resource-group-1.png"::: |
+| [!INCLUDE [Delete resource group step 2](./includes/create-table-dotnet/remove-resource-group-2.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-remove-resource-group-2-240px.png" alt-text="A screenshot showing the location of the Delete resource group button." lightbox="./media/create-table-dotnet/azure-portal-remove-resource-group-2.png"::: |
+| [!INCLUDE [Delete resource group step 3](./includes/create-table-dotnet/remove-resource-group-3.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-remove-resource-group-3-240px.png" alt-text="A screenshot showing the confirmation dialog for deleting a resource group." lightbox="./media/create-table-dotnet/azure-portal-remove-resource-group-3.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+To delete a resource group using the Azure CLI, use the [az group delete](/cli/azure/group#az-group-delete) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
+
+```azurecli
+az group delete --name $RESOURCE_GROUP_NAME
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+To delete a resource group using Azure PowerShell, use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
+
+```azurepowershell
+Remove-AzResourceGroup -Name $resourceGroupName
+```
++ ## Next steps
cost-management-billing Buy Vm Software Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/buy-vm-software-reservation.md
+
+ Title: Prepay for Virtual machine software reservations
+description: Learn how to prepay for Azure virtual machine software reservations to save money.
+++++ Last updated : 03/17/2022+++
+# Prepay for Virtual machine software reservations (Azure Marketplace)
+
+When you prepay for your virtual machine software usage (available in the Azure Marketplace), you can save money over your pay-as-you-go costs. The discount is automatically applied to a deployed plan that matches the reservation, not to the virtual machine usage. You can buy reservations for virtual machines separately for more savings.
+
+You can buy a virtual machine software reservation in the Azure portal. To buy a reservation:
+
+- You must have the owner role for at least one Enterprise or individual subscription with pay-as-you-go pricing.
+- For Enterprise subscriptions, the **Add Reserved Instances** option must be enabled in the [EA portal](https://ea.azure.com/). If the setting is disabled, you must be an EA Admin for the subscription.
+- For the Cloud Solution Provider (CSP) program, the admin agents or sales agents can buy the software plans.
+
+## Buy a virtual machine software reservation
+
+1. Select your desired plan from Azure Marketplace that has reservation pricing.
+2. Select **Purchase** and then select the Virtual machine software reservation that you want to buy.
+Any virtual machine software usage that matches the attributes of what you buy gets the discount. The actual number of deployments that get the discount depends on the scope and quantity selected.
+3. Select a subscription. It's used to pay for the plan.
The subscription payment method is charged the upfront costs for the reservation. To buy a reservation, you must have the owner role or reservation purchaser role on an Azure subscription that's of type Enterprise (MS-AZR-0017P or MS-AZR-0148P) or Pay-As-You-Go (MS-AZR-0003P or MS-AZR-0023P) or Microsoft Customer Agreement.
+ - For an enterprise subscription, these reservation purchase charges are not deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance. The charges are billed to the subscription's credit card or invoice payment method.
+ - For an individual subscription with pay-as-you-go pricing, the charges are billed to the subscription's credit card or invoice payment method.
+4. Select a scope. The scope can cover one subscription or multiple subscriptions (using a shared scope).
+ - Single subscription - The plan discount is applied to matching usage in the subscription.
+ - Shared - The plan discount is applied to matching instances in any subscription in your billing context. For enterprise customers, the billing context is the enrollment and includes all subscriptions in the enrollment. For individual plan with pay-as-you-go pricing customers, the billing context is all individual plans with pay-as-you-go pricing subscriptions created by the account administrator.
+ - Management group - Applies the reservation discount to the matching resource in the list of subscriptions that are a part of both the management group and billing scope.
+ - Single resource group - Applies the reservation discount to the matching resources in the selected resource group only.
+5. Select a product to choose the VM size and the image type. The discount applies to matching resources, with instance size flexibility turned on.
+6. Select a one-year or three-year term.
+7. Choose a quantity, which is the number of prepaid VM instances that can get the billing discount.
+8. Add the product to the cart, review and purchase.
+
+The reservation discount is automatically applied to the software meter that you pre-pay for. VM compute charges aren't covered by the software reservation. You can purchase the VM reservations separately.
+
+## Discount applies to different VM sizes
+
+Like Reserved VM Instances, Virtual machine software reservation purchases offer instance size flexibility. So, your discount applies even when you deploy a VM with a different vCPU count. The discount applies to different VM sizes within the Virtual machine software reservation.
+
+## Self-service cancellation and exchanges
+
+You can't exchange a Virtual machine software reservation that you bought yourself. You can, however, cancel the reservation within 72 hours of purchase. The [cancellation limit](exchange-and-refund-azure-reservations.md#cancel-exchange-and-refund-policies) applies.
+
+Check your usage before purchasing to make sure you buy the right software reservation.
+
+## Next steps
+
+- To learn more about Azure Reservations, see the following articles:
+ - [What are Azure Reservations?](save-compute-costs-reservations.md)
+ - [Understand how the Azure virtual machine software reservation discount is applied](understand-vm-software-reservation-discount.md)
+ - [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md)
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
You can purchase reservations from Azure portal, APIs, PowerShell, CLI. Read the
- [Synapse Analytics - data warehouse](prepay-sql-data-warehouse-charges.md) - [Synapse Analytics - Pre-purchase](synapse-analytics-pre-purchase-plan.md) - [Virtual machines](../../virtual-machines/prepay-reserved-vm-instances.md?toc=/azure/cost-management-billing/reservations/toc.json)
+- [Virtual machine software](buy-vm-software-reservation.md)
## Buy reservations with monthly payments
cost-management-billing Understand Vm Software Reservation Discount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-vm-software-reservation-discount.md
+
+ Title: Understand how the Azure virtual machine software reservation discount is applied
+description: Learn how Azure virtual machine software reservation discount is applied before you buy.
+++++ Last updated : 03/09/2022+++
+# Understand how the virtual machine software reservation (Azure Marketplace) discount is applied
+
+After you buy a virtual machine software reservation, the discount is automatically applied to a deployed plan that matches the reservation. A software reservation only covers the cost of running the software plan you chose on an Azure VM.
+
+To buy the right virtual machine software reservation, you need to understand the software plan you want to run and the number of vCPUs on those VMs.
+
+## Discount applies to different VM sizes
+
+Like Reserved VM Instances, virtual machine software reservation purchases offer instance size flexibility. So, your discount applies even when you deploy a VM with a different vCPU count. The discount applies to different VM sizes within the virtual machine software reservation.
+
+For example, if you buy a virtual machine software reservation for a VM with one vCPU, the ratio for that reservation is 1. If you then deploy a two-vCPU machine whose ratio is 2, the reservation covers 50% of the cost. The ratios are based on how the software plan was configured by the publisher.
+
+## Prepay for virtual machine software reservations
+
+When you prepay for your virtual machine software usage (available in the Azure Marketplace), you can save money over your pay-as-you-go costs. The discount is automatically applied to a deployed plan that matches the reservation, not to the virtual machine usage. You can buy reservations for virtual machines separately for more savings.
+
+You can buy a virtual machine software reservation in the Azure portal. To buy a reservation:
+
+- You must have the owner role for at least one Enterprise or individual subscription with pay-as-you-go pricing.
+- For Enterprise subscriptions, the **Add Reserved Instances** option must be enabled in the [EA portal](https://ea.azure.com/). If the setting is disabled, you must be an EA Admin for the subscription.
+- For the Cloud Solution Provider (CSP) program, the admin agents or sales agents can buy the software plans.
+
+## Need help? Contact us
+
+If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+
+## Next steps
+
+To learn more about Azure Reservations, see the following articles:
+
+- [What are reservations for Azure?](save-compute-costs-reservations.md)
+- [Prepay for Azure virtual machine software reservations](buy-vm-software-reservation.md)
+- [Prepay for Virtual Machines with Azure Reserved VM Instances](../../virtual-machines/prepay-reserved-vm-instances.md)
+- [Prepay for SQL Database compute resources with Azure SQL Database reserved capacity](../../azure-sql/database/reserved-capacity-overview.md)
+- [Manage reservations for Azure](manage-reserved-vm-instance.md)
+- [Understand reservation usage for your Pay-As-You-Go subscription](understand-reserved-instance-usage.md)
+- [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md)
+- [Understand reservation usage for CSP subscriptions](/partner-center/azure-reservations)
+- [Windows software costs not included with reservations](reserved-instance-windows-software-costs.md)
data-factory Concepts Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-integration-runtime.md
Data Factory offers three types of Integration Runtime (IR), and you should choo
The following table describes the capabilities and network support for each of the integration runtime types:
-IR type | Public network | Private network
-- | -- |
-Azure | Data Flow<br/>Data movement<br/>Activity dispatch | Data Flow<br/>Data movement<br/>Activity dispatch
-Self-hosted | Data movement<br/>Activity dispatch | Data movement<br/>Activity dispatch
-Azure-SSIS | SSIS package execution | SSIS package execution
+IR type | Public Network Support | Private Link Support |
+- | -- | -- |
+Azure | Data Flow<br/>Data movement<br/>Activity dispatch | Data Flow<br/>Data movement<br/>Activity dispatch |
+Self-hosted | Data movement<br/>Activity dispatch | Data movement<br/>Activity dispatch |
+Azure-SSIS | SSIS package execution | SSIS package execution |
+
+> [!NOTE]
+> Outbound controls vary by service for Azure IR. In Synapse, workspaces have options to limit outbound traffic from the [managed virtual network](../synapse-analytics/security/synapse-workspace-managed-vnet.md) when utilizing Azure IR. In Data Factory, all ports are opened for [outbound communications](managed-virtual-network-private-endpoint.md#outbound-communications-through-public-endpoint-from-adf-managed-virtual-network) when utilizing Azure IR. Azure-SSIS IR can be integrated with your vNET to provide [outbound communications](azure-ssis-integration-runtime-standard-virtual-network-injection.md) controls.
## Azure integration runtime
An Azure integration runtime can:
### Azure IR network environment
-Azure Integration Runtime supports connecting to data stores and computes services with public accessible endpoints. Enabling Managed Virtual Network, Azure Integration Runtime supports connecting to data stores using private link service in private network environment.
+Azure Integration Runtime supports connecting to data stores and compute services with publicly accessible endpoints. With Managed Virtual Network enabled, Azure Integration Runtime supports connecting to data stores by using the private link service in a private network environment. In Synapse, workspaces have options to limit outbound traffic from the IR [managed virtual network](../synapse-analytics/security/synapse-workspace-managed-vnet.md). In Data Factory, all ports are opened for [outbound communications](managed-virtual-network-private-endpoint.md#outbound-communications-through-public-endpoint-from-adf-managed-virtual-network). The Azure-SSIS IR can be integrated with your vNET to provide [outbound communications](azure-ssis-integration-runtime-standard-virtual-network-injection.md) controls.
### Azure IR compute resource and scaling Azure integration runtime provides a fully managed, serverless compute in Azure. You don't have to worry about infrastructure provision, software installation, patching, or capacity scaling. In addition, you only pay for the duration of the actual utilization.
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-database.md
To copy data to Azure SQL Database, the following properties are supported in th
| disableMetricsCollection | The service collects metrics such as Azure SQL Database DTUs for copy performance optimization and recommendations, which introduces additional master DB access. If you are concerned with this behavior, specify `true` to turn it off. | No (default is `false`) | | maxConcurrentConnections |The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No | | WriteBehavior | Specify the write behavior for copy activity to load data into Azure SQL Database. <br/> The allowed value is **Insert** and **Upsert**. By default, the service uses insert to load data. | No |
-| upsertSettings | Specify the group of the settings for write behavior. <br/> Apply when the WriteBehavior option is `Upert`. | No |
+| upsertSettings | Specify the group of the settings for write behavior. <br/> Apply when the WriteBehavior option is `Upsert`. | No |
| ***Under `upsertSettings`:*** | | | | useTempDB | Specify whether to use a global temporary table or a physical table as the interim table for upsert. <br>By default, the service uses a global temporary table as the interim table; the default value is `true`. | No | | interimSchemaName | Specify the interim schema for creating the interim table if a physical table is used. Note: the user needs to have permission to create and delete tables. By default, the interim table shares the same schema as the sink table. <br/> Apply when the useTempDB option is `False`. | No |
data-factory Connector Odata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-odata.md
Previously updated : 10/13/2021 Last updated : 04/07/2022 # Copy data from an OData source by using Azure Data Factory or Synapse Analytics
> * [Version 1](v1/data-factory-odata-connector.md) > * [Current version](connector-odata.md)
-This article outlines how to use Copy Activity in a Azure Data Factory or Synapse Analytics pipeline to copy data from an OData source. The article builds on [Copy Activity](copy-activity-overview.md), which presents a general overview of Copy Activity.
+This article outlines how to use Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from an OData source. The article builds on [Copy Activity](copy-activity-overview.md), which presents a general overview of Copy Activity.
## Supported capabilities
You can copy data from an OData source to any supported sink data store. For a l
Specifically, this OData connector supports: - OData version 3.0 and 4.0.-- Copying data by using one of the following authentications: **Anonymous**, **Basic**, **Windows**, and **AAD service principal**.
+- Copying data by using one of the following authentications: **Anonymous**, **Basic**, **Windows**, and **Azure Active Directory service principal**.
## Prerequisites
Specifically, this OData connector supports:
Use the following steps to create a linked service to an OData store in the Azure portal UI.
-1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
# [Azure Data Factory](#tab/data-factory)
The following properties are supported for an OData linked service:
| servicePrincipalEmbeddedCert | Specify the base64 encoded certificate of your application registered in Azure Active Directory. Mark this field as a **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | No | | servicePrincipalEmbeddedCertPassword | Specify the password of your certificate if your certificate is secured with a password. Mark this field as a **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | No| | tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering the mouse in the top-right corner of the Azure portal. | No |
-| aadResourceId | Specify the AAD resource you are requesting for authorization.| No |
-| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your AAD application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the service's cloud environment is used. | No |
+| aadResourceId | Specify the Azure AD resource you are requesting for authorization.| No |
+| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure Active Directory application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the service's cloud environment is used. | No |
| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to use to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. If not specified, the default Azure Integration Runtime is used. |No | **Example 1: Using Anonymous authentication**
Project Online requires user-based OAuth, which is not supported by Azure Data F
- **Callback URL**: Enter `https://www.localhost.com/`.  - **Auth URL**: Enter `https://login.microsoftonline.com/common/oauth2/authorize?resource=https://<your tenant name>.sharepoint.com`. Replace `<your tenant name>` with your own tenant name. - **Access Token URL**: Enter `https://login.microsoftonline.com/common/oauth2/token`.
- - **Client ID**: Enter your AAD service principal ID.
+ - **Client ID**: Enter your Azure Active Directory service principal ID.
- **Client Secret**: Enter your service principal secret. - **Client Authentication**: Select **Send as Basic Auth header**.
- 1. You will be asked to login with your username and password.
+ 1. You will be asked to sign in with your username and password.
1. Once you get your access token, please copy and save it for the next step. :::image type="content" source="./media/connector-odata/odata-project-online-postman-access-token-inline.png" alt-text="Screenshot of using Postman to get the access token." lightbox="./media/connector-odata/odata-project-online-postman-access-token-expanded.png":::
To learn details about the properties, check [Lookup activity](control-flow-look
## Next steps
-For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
+For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-self-hosted-integration-runtime.md
Here are details of the application's actions and arguments:
9. Get the authentication key by using PowerShell. Here's a PowerShell example for retrieving the authentication key: ```powershell
- Get-AzDataFactoryV2IntegrationRuntimeKey -ResourceGroupName $resourceGroupName -DataFactoryName $dataFactoryName -Name $selfHostedIntegrationRuntime
+ Get-AzDataFactoryV2IntegrationRuntimeKey -ResourceGroupName $resourceGroupName -DataFactoryName $dataFactoryName -Name $selfHostedIntegrationRuntimeName
``` 10. On the **Register Integration Runtime (Self-hosted)** window of Microsoft Integration Runtime Configuration Manager running on your machine, take the following steps:
data-factory Data Factory Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-private-link.md
Last updated 03/18/2022
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-xxx-md.md)]
-By using Azure Private Link, you can connect to various platforms as a service (PaaS) deployments in Azure via a private endpoint. A private endpoint is a private IP address within a specific virtual network and subnet. For a list of PaaS deployments that support Private Link functionality, see [Private Link documentation](../private-link/index.yml).
+By using Azure private link, you can connect to various platform as a service (PaaS) deployments in Azure via a private endpoint. A private endpoint is a private IP address within a specific virtual network and subnet. For a list of PaaS deployments that support private link functionality, see [Private Link documentation](../private-link/index.yml).
## Secure communication between customer networks and Azure Data Factory You can set up an Azure virtual network as a logical representation of your network in the cloud. Doing so provides the following benefits: * You help protect your Azure resources from attacks in public networks.
-* You let the networks and Data Factory securely communicate with each other.
+* You let the networks and data factory securely communicate with each other.
You can also connect an on-premises network to your virtual network by setting up an Internet Protocol security (IPsec) VPN (site-to-site) connection or an Azure ExpressRoute (private peering) connection.
You can also install a self-hosted integration runtime on an on-premises machine
* Run copy activities between a cloud data store and a data store in a private network. * Dispatch transform activities against compute resources in an on-premises network or an Azure virtual network.
-Several communication channels are required between Azure Data Factory and the customer virtual network, as shown in the following table:
+Several communication channels are required between Azure data factory and the customer virtual network, as shown in the following table:
| Domain | Port | Description | | - | -- | |
-| `adf.azure.com` | 443 | A control plane, required by Data Factory authoring and monitoring. |
+| `adf.azure.com` | 443 | Azure data factory portal, required by data factory authoring and monitoring. |
| `*.{region}.datafactory.azure.net` | 443 | Required by the self-hosted integration runtime to connect to the Data Factory service. | | `*.servicebus.windows.net` | 443 | Required by the self-hosted integration runtime for interactive authoring. | | `download.microsoft.com` | 443 | Required by the self-hosted integration runtime for downloading the updates. |
-With the support of Private Link for Azure Data Factory, you can:
-* Create a private endpoint in your virtual network.
-* Enable the private connection to a specific data factory instance.
+> [!NOTE]
+> Disabling public network access is applicable only to the self-hosted integration runtime, not to Azure Integration Runtime and SQL Server Integration Services (SSIS) Integration Runtime.
-The communications to Azure Data Factory service go through Private Link and help provide secure private connectivity.
+The communications to Azure data factory service go through private link and help provide secure private connectivity.
:::image type="content" source="./media/data-factory-private-link/private-link-architecture.png" alt-text="Diagram of Private Link for Azure Data Factory architecture.":::
-Enabling the Private Link service for each of the preceding communication channels offers the following functionality:
+Enabling the private link service for each of the preceding communication channels offers the following functionality:
- **Supported**:
- - You can author and monitor the data factory in your virtual network, even if you block all outbound communications.
- - The command communications between the self-hosted integration runtime and the Azure Data Factory service can be performed securely in a private network environment. The traffic between the self-hosted integration runtime and the Azure Data Factory service goes through Private Link.
+ - You can author and monitor in the data factory portal from your virtual network, even if you block all outbound communications. Note that even if you create a private endpoint for the portal, others can still access the Azure data factory portal through the public network.
+ - The command communications between the self-hosted integration runtime and the Azure data factory service can be performed securely in a private network environment. The traffic between the self-hosted integration runtime and the Azure data factory service goes through private link.
- **Not currently supported**: - Interactive authoring that uses a self-hosted integration runtime, such as test connection, browse folder list and table list, get schema, and preview data, goes through Private Link. - The new version of the self-hosted integration runtime, which can be automatically downloaded from Microsoft Download Center if you enable Auto-Update, is not supported at this time.
+
> [!NOTE] > For functionality that's not currently supported, you still need to configure the previously mentioned domain and port in the virtual network or your corporate firewall. > [!NOTE]
- > Connecting to Azure Data Factory via private endpoint is only applicable to self-hosted integration runtime in data factory. It is not supported for Azure Synapse.
+ > Connecting to Azure data factory via private endpoint is only applicable to self-hosted integration runtime in data factory. It is not supported for Azure Synapse.
> [!WARNING]
-> If you enable Private Link in Azure Data Factory and block public access at the same time, it is reccomended that you store your credentials in an Azure key vault to ensure they are secure.
+> If you enable private link in Azure data factory and block public access at the same time, it is recommended that you store your credentials in an Azure key vault to ensure they are secure.
+
+## Steps to configure private endpoint for communication between self-hosted integration runtime and Azure data factory
+This section details how to configure the private endpoint for communication between the self-hosted integration runtime and Azure data factory.
+
+**Step 1: Create a private endpoint and set up a private link for Azure data factory.**
+The private endpoint is created in your virtual network for the communication between the self-hosted integration runtime and the Azure data factory service. Please follow the detailed steps in [Set up a private endpoint link for Azure Data Factory](#set-up-a-private-endpoint-link-for-azure-data-factory).
+
+**Step 2: Make sure the DNS configuration is correct.**
+Please follow the instructions in [DNS changes for private endpoints](#dns-changes-for-private-endpoints) to check or configure your DNS settings.
+
+**Step 3: Put FQDNs of Azure Relay and download center into the allow list of your firewall.**
+If your self-hosted integration runtime is installed on a virtual machine in your virtual network, please allow outbound traffic to the FQDNs below in the NSG of your virtual network.
+
+If your self-hosted integration runtime is installed on a machine in your on-premises environment, please allow outbound traffic to the FQDNs below in the firewall of your on-premises environment and the NSG of your virtual network.
+
+| Domain | Port | Description |
+| - | -- | |
+| `*.servicebus.windows.net` | 443 | Required by the self-hosted integration runtime for interactive authoring. |
+| `download.microsoft.com` | 443 | Required by the self-hosted integration runtime for downloading the updates. |
+
+> [!NOTE]
+> If you don't allow the above outbound traffic in the firewall and NSG, the self-hosted integration runtime is shown in a limited status, but you can still use it to execute activities. Only interactive authoring and auto-update don't work.
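If outbound traffic from the virtual network egresses through Azure Firewall rather than only an NSG, an application rule can allow these FQDNs. A minimal sketch with the Az.Network module; the firewall name, resource group, and source address range are placeholders:

```powershell
# Allow the self-hosted integration runtime subnet to reach the required FQDNs over HTTPS.
$rule = New-AzFirewallApplicationRule -Name "Allow-SHIR-Outbound" `
    -SourceAddress "10.0.0.0/24" `
    -TargetFqdn "*.servicebus.windows.net", "download.microsoft.com" `
    -Protocol "Https:443"

$collection = New-AzFirewallApplicationRuleCollection -Name "shir-outbound" `
    -Priority 200 -ActionType "Allow" -Rule $rule

# Attach the rule collection to an existing firewall and save the change.
$firewall = Get-AzFirewall -ResourceGroupName "myResourceGroup" -Name "myFirewall"
$firewall.AddApplicationRuleCollection($collection)
Set-AzFirewall -AzureFirewall $firewall
```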
## DNS changes for private endpoints
-When you create a private endpoint, the DNS CNAME resource record for the Data Factory is updated to an alias in a subdomain with the prefix 'privatelink'. By default, we also create a [private DNS zone](../dns/private-dns-overview.md), corresponding to the 'privatelink' subdomain, with the DNS A resource records for the private endpoints.
+When you create a private endpoint, the DNS CNAME resource record for the data factory is updated to an alias in a subdomain with the prefix 'privatelink'. By default, we also create a [private DNS zone](../dns/private-dns-overview.md), corresponding to the 'privatelink' subdomain, with the DNS A resource records for the private endpoints.
-When you resolve the data factory endpoint URL from outside the VNet with the private endpoint, it resolves to the public endpoint of the data factory service. When resolved from the VNet hosting the private endpoint, the storage endpoint URL resolves to the private endpoint's IP address.
+When you resolve the data factory endpoint URL from outside the virtual network with the private endpoint, it resolves to the public endpoint of the data factory service. When resolved from the virtual network hosting the private endpoint, the storage endpoint URL resolves to the private endpoint's IP address.
-For the illustrated example above, the DNS resource records for the Data Factory 'DataFactoryA', when resolved from outside the VNet hosting the private endpoint, will be:
+For the illustrated example above, the DNS resource records for the data factory 'DataFactoryA', when resolved from outside the virtual network hosting the private endpoint, will be:
| Name | Type | Value | | - | -- | |
For the illustrated example above, the DNS resource records for the Data Factory
| DataFactoryA.{region}.datafactory.azure.net | CNAME | < data factory service public endpoint > | | < data factory service public endpoint > | A | < data factory service public IP address > |
-The DNS resource records for DataFactoryA, when resolved in the VNet hosting the private endpoint, will be:
+The DNS resource records for DataFactoryA, when resolved in the virtual network hosting the private endpoint, will be:
| Name | Type | Value | | - | -- | | | DataFactoryA.{region}.datafactory.azure.net | CNAME | DataFactoryA.{region}.privatelink.datafactory.azure.net | | DataFactoryA.{region}.privatelink.datafactory.azure.net | A | < private endpoint IP address > |
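One way to confirm which of these record sets a machine actually sees is to resolve the endpoint name from both sides; a quick check with a placeholder factory name and region:

```powershell
# Run from a VM inside the virtual network hosting the private endpoint, and again
# from a machine outside it. Inside, the answer should be the private endpoint IP;
# outside, the name should resolve to the data factory public endpoint.
Resolve-DnsName -Name "DataFactoryA.westeurope.datafactory.azure.net" -Type A
```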
-If you are using a custom DNS server on your network, clients must be able to resolve the FQDN for the Data Factory endpoint to the private endpoint IP address. You should configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet, or configure the A records for ' DataFactoryA.{region}.datafactory.azure.net' with the private endpoint IP address.
- > [!NOTE]
- > There is currently only one Azure Data Factory Portal endpoint and therefore only one private endpoint for portal in a DNS zone. Attempting to create a second or subsequent portal private endpoint will overwrite the previously created private DNS entry for portal.
+If you are using a custom DNS server on your network, clients must be able to resolve the FQDN for the data factory endpoint to the private endpoint IP address. You should configure your DNS server to delegate your private link subdomain to the private DNS zone for the virtual network, or configure the A records for ' DataFactoryA.{region}.datafactory.azure.net' with the private endpoint IP address.
- [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) - [DNS configuration for private endpoints](../private-link/private-endpoint-overview.md#dns-configuration)
+ > [!NOTE]
+ > There is currently only one Azure data factory portal endpoint and therefore only one private endpoint for portal in a DNS zone. Attempting to create a second or subsequent portal private endpoint will overwrite the previously created private DNS entry for portal.
+ ## Set up a private endpoint link for Azure Data Factory
Finally, you must create the private endpoint in your data factory.
8. Select **Create**.
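The same private endpoint can also be created with Azure PowerShell. A minimal sketch, assuming an existing data factory, virtual network, and subnet; the resource names, and the use of the `dataFactory` group ID for the command channel, are assumptions to adapt to your environment:

```powershell
# Hypothetical resource names - replace with your own.
$rg      = "myResourceGroup"
$vnet    = Get-AzVirtualNetwork -ResourceGroupName $rg -Name "myVnet"
$subnet  = $vnet.Subnets | Where-Object { $_.Name -eq "mySubnet" }
$factory = Get-AzDataFactoryV2 -ResourceGroupName $rg -Name "DataFactoryA"

# Private link connection to the data factory; the 'dataFactory' group ID covers the
# command channel used by the self-hosted integration runtime.
$connection = New-AzPrivateLinkServiceConnection -Name "adf-plsc" `
    -PrivateLinkServiceId $factory.DataFactoryId -GroupId "dataFactory"

New-AzPrivateEndpoint -ResourceGroupName $rg -Name "adf-private-endpoint" `
    -Location $vnet.Location -Subnet $subnet -PrivateLinkServiceConnection $connection
```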
-> [!NOTE]
-> Disabling public network access is applicable only to the self-hosted integration runtime, not to Azure Integration Runtime and SQL Server Integration Services (SSIS) Integration Runtime.
-> [!NOTE]
-> You can still access the Azure Data Factory portal through a public network after you create private endpoint for the portal.
+## Restrict access for data factory resources using private link
+If you want to restrict access to data factory resources in your subscriptions through private link, please follow the steps in [Use portal to create private link for managing Azure resources](https://docs.microsoft.com/azure/azure-resource-manager/management/create-private-link-access-portal?source=docs).
+
+## Known issue
+Two parties can't access each other's PaaS resources when both sides are exposed to private link and private endpoints. This is a known limitation of private link and private endpoints.
+For example, customer A uses a private link to access the portal of data factory A in virtual network A. When data factory A doesn't block public access, customer B can access the portal of data factory A from virtual network B over the public network. But when customer B creates a private endpoint against data factory B in virtual network B, customer B can no longer access data factory A over the public network from virtual network B.
+ ## Next steps
data-factory Data Flow Conditional Split https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-conditional-split.md
The **Split on** setting determines whether the row of data flows to the first m
Use the data flow expression builder to enter an expression for the split condition. To add a new condition, click on the plus icon in an existing row. A default stream can be added as well for rows that don't match any condition. ## Data flow script
The below example is a conditional split transformation named `SplitByYear` that
In the service UI, this transformation looks like the below image: The data flow script for this transformation is in the snippet below:
data-factory How To Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-send-email.md
For the **Send Email (V2)** action, customize how you wish to format the email,
:::image type="content" source="media/how-to-send-email/logic-app-email-action.png" alt-text="Shows the Logic App workflow designer for the Send Email (V2) action.":::
-Save the workflow. Browse to the Overview page for the workflow. Make a note of the Workflow URL for your new workflow then:
+Save the workflow. Browse to the Overview page for the workflow. Make a note of the Workflow URL for your new workflow, highlighted in the image below:
:::image type="content" source="media/how-to-send-email/logic-app-workflow-url.png" alt-text="Shows the Logic App workflow Overview tab with the Workflow URL highlighted.":::
data-factory Tutorial Push Lineage To Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-push-lineage-to-purview.md
Last updated 08/10/2021
-# Push Data Factory lineage data to Azure Purview (Preview)
+# Push Data Factory lineage data to Azure Purview
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
On the activity asset, click the Lineage tab, you can see all the lineage inform
[Catalog lineage user guide](../purview/catalog-lineage-user-guide.md)
-[Connect Data Factory to Azure Purview](connect-data-factory-to-azure-purview.md)
+[Connect Data Factory to Azure Purview](connect-data-factory-to-azure-purview.md)
databox-online Azure Stack Edge Gpu Deploy Configure Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-compute.md
Previously updated : 02/15/2022 Last updated : 04/05/2022 # Customer intent: As an IT admin, I need to understand how to configure compute on Azure Stack Edge Pro so I can use it to transform the data before sending it to Azure.
In this tutorial, you learn how to:
## Prerequisites
-Before you set up a compute role on your Azure Stack Edge Pro device, make sure that:
+Before you set up a compute role on your Azure Stack Edge Pro device:
-- You've activated your Azure Stack Edge Pro device as described in [Activate Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-activate.md).
+- Make sure that you've activated your Azure Stack Edge Pro device as described in [Activate Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-activate.md).
- Make sure that you've followed the instructions in [Enable compute network](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches-and-compute-ips) and: - Enabled a network interface for compute. - Assigned Kubernetes node IPs and Kubernetes external service IPs.
+ > [!NOTE]
+ > If your datacenter firewall is restricting or filtering traffic based on source IPs or MAC addresses, make sure that the compute IPs (Kubernetes node IPs) and MAC addresses are on the allowed list. The MAC addresses can be specified by running the `Set-HcsMacAddressPool` cmdlet on the PowerShell interface of the device.
+ ## Configure compute [!INCLUDE [configure-compute](../../includes/azure-stack-edge-gateway-configure-compute.md)]
databox-online Azure Stack Edge Gpu Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md
Previously updated : 03/03/2022 Last updated : 04/06/2022 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
Follow these steps to enable compute on a virtual switch and configure virtual n
> [!IMPORTANT] > - Kubernetes on Azure Stack Edge uses 172.27.0.0/16 subnet for pod and 172.28.0.0/16 subnet for service. Make sure that these are not in use in your network. If these subnets are already in use in your network, you can change these subnets by running the `Set-HcsKubeClusterNetworkInfo` cmdlet from the PowerShell interface of the device. For more information, see [Change Kubernetes pod and service subnets](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-pod-and-service-subnets).
- > - DHCP mode is not supported for Kubernetes node IPs. If you plan to deploy IoT Edge/Kubernetes, you must assign static Kubernetes IPs and then enable IoT role. This will ensure that static IPs are assigned to Kubernetes node VMs.
+ > - DHCP mode is not supported for Kubernetes node IPs. If you plan to deploy IoT Edge/Kubernetes, you must assign static Kubernetes IPs and then enable IoT role. This will ensure that static IPs are assigned to Kubernetes node VMs.
+ > - If your datacenter firewall is restricting or filtering traffic based on source IPs or MAC addresses, make sure that the compute IPs (Kubernetes node IPs) and MAC addresses are on the allowed list. The MAC addresses can be specified by running the `Set-HcsMacAddressPool` cmdlet on the PowerShell interface of the device.
1. Assign **Kubernetes external service IPs**. These are also the load-balancing IP addresses. These contiguous IP addresses are for services that you want to expose outside of the Kubernetes cluster and you specify the static IP range depending on the number of services exposed.
databox-online Azure Stack Edge Gpu Troubleshoot Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-troubleshoot-iot-edge.md
Previously updated : 06/19/2021 Last updated : 04/06/2022
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md
Title: Microsoft Defender for container registries - the benefits and features description: Learn about the benefits and features of Microsoft Defender for container registries. Previously updated : 12/08/2021 Last updated : 04/07/2022 --++ # Introduction to Microsoft Defender for container registries (deprecated)
Yes. If you have an organizational need to ignore a finding, rather than remedia
### Why is Defender for Cloud alerting me to vulnerabilities about an image that isn't in my registry? Defender for Cloud provides vulnerability assessments for every image pushed or pulled in a registry. Some images may reuse tags from an image that was already scanned. For example, you may reassign the tag "Latest" every time you add an image to a digest. In such cases, the 'old' image does still exist in the registry and may still be pulled by its digest. If the image has security findings and is pulled, it'll expose security vulnerabilities.
-## Does Microsoft Defender for Containers support AKS clusters with virtual machines scale set (VMSS)?
-Yes.
- ## Next steps > [!div class="nextstepaction"]
defender-for-cloud Defender For Container Registries Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-usage.md
Title: How to use Defender for Containers description: Learn how to use Defender for Containers to scan Linux images in your Linux-hosted registries Previously updated : 03/07/2022 Last updated : 04/07/2022 --++ # Use Defender for Containers to scan your ACR images for vulnerabilities
To create a rule:
:::image type="content" source="./media/remediate-vulnerability-findings-vm/modify-rule.png" alt-text="Modify or delete an existing rule."::: 1. To view or delete the rule, select the ellipsis menu ("...").
+## FAQ
+
+### How does Defender for Cloud scan an image?
+Defender for Cloud pulls the image from the registry and runs it in an isolated sandbox with the Qualys scanner. The scanner extracts a list of known vulnerabilities.
+
+Defender for Cloud filters and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying you when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts.
+
+### Can I get the scan results via REST API?
+Yes. The results are under [Sub-Assessments REST API](/rest/api/securitycenter/subassessments/list/). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
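For example, the same sub-assessment data can be pulled from PowerShell through the Az.ResourceGraph module; a minimal sketch, where the registry name filter is a placeholder and the resource type follows the ARG schema for security sub-assessments:

```powershell
# Requires the Az.ResourceGraph module (Search-AzGraph).
$query = @"
securityresources
| where type == 'microsoft.security/assessments/subassessments'
| where id contains 'myregistry'
| project id, name, properties
"@

Search-AzGraph -Query $query
```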
+
+### What registry types are scanned? What types are billed?
+For a list of the types of container registries supported by Microsoft Defender for container registries, see [Availability](defender-for-container-registries-introduction.md#availability).
+
+If you connect unsupported registries to your Azure subscription, Defender for Cloud won't scan them and won't bill you for them.
+
+### Can I customize the findings from the vulnerability scanner?
+Yes. If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't impact your secure score or generate unwanted noise.
+
+[Learn about creating rules to disable findings from the integrated vulnerability assessment tool](defender-for-container-registries-usage.md#disable-specific-findings).
+
+### Why is Defender for Cloud alerting me to vulnerabilities about an image that isnΓÇÖt in my registry?
+Defender for Cloud provides vulnerability assessments for every image pushed or pulled in a registry. Some images may reuse tags from an image that was already scanned. For example, you may reassign the tag ΓÇ£LatestΓÇ¥ every time you add an image to a digest. In such cases, the ΓÇÿoldΓÇÖ image does still exist in the registry and may still be pulled by its digest. If the image has security findings and is pulled, it'll expose security vulnerabilities.
+ ## Next steps Learn more about the [advanced protection plans of Microsoft Defender for Cloud](defender-for-cloud-introduction.md).
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers -- Previously updated : 03/15/2022++ Last updated : 04/07/2022 # Overview of Microsoft Defender for Containers
The following describes the components necessary in order to receive the full pr
### What are the options to enable the new plan at scale? We've rolled out a new policy in Azure Policy, **Configure Microsoft Defender for Containers to be enabled**, to make it easier to enable the new plan at scale.
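A sketch of assigning that built-in policy at subscription scope with Azure PowerShell; the assignment name, scope, location, and identity parameters are assumptions - verify the definition and its required identity in your environment and Az.Resources version first:

```powershell
# Find the built-in definition by display name (Az.Resources module).
$definition = Get-AzPolicyDefinition |
    Where-Object { $_.Properties.DisplayName -eq "Configure Microsoft Defender for Containers to be enabled" }

# Assign at subscription scope; a managed identity is needed so the policy can deploy the plan.
New-AzPolicyAssignment -Name "enable-defender-for-containers" `
    -PolicyDefinition $definition `
    -Scope "/subscriptions/<subscription-id>" `
    -IdentityType SystemAssigned `
    -Location "westeurope"
```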
+### Does Microsoft Defender for Containers support AKS clusters with virtual machines scale set (VMSS)?
+Yes.
### Does Microsoft Defender for Containers support AKS without scale set (default)? No. Only Azure Kubernetes Service (AKS) clusters that use virtual machine scale sets for the nodes are supported.
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
To protect machines in hybrid and multi-cloud environments, Defender for Cloud u
Microsoft Defender for Servers provides threat detection and advanced defenses to your Windows and Linux machines whether they're running in Azure, AWS, GCP, or on-premises. Microsoft Defender for Servers is available in two plans: -- **Microsoft Defender for Servers Plan 1** - deploys Microsoft Defender for Endpoint to your servers with these additional capabilities:
+- **Microsoft Defender for Servers Plan 1** - deploys Microsoft Defender for Endpoint to your servers and provides these capabilities:
- Microsoft Defender for Endpoint licenses are charged per hour instead of per seat, lowering costs for protecting virtual machines only when they are in use.
- - Microsoft Defender for Endpoint is deployed automatically to all cloud workloads so that you know they are protected when they spin up.
+ - Microsoft Defender for Endpoint deploys automatically to all cloud workloads so that you know they're protected when they spin up.
- Alerts and vulnerability data from Microsoft Defender for Endpoint is shown in Microsoft Defender for Cloud - **Microsoft Defender for Servers Plan 2** (formerly Defender for Servers) - includes the benefits of Plan 1 and support for all of the other Microsoft Defender for Servers features.
For pricing details in your currency of choice and according to your region, see
To enable the Microsoft Defender for Servers plans: 1. Go to **Environment settings** and select your subscription.
-2. If Microsoft Defender for Servers is not enabled, set it to **On**.
+2. If Microsoft Defender for Servers isn't enabled, set it to **On**.
Plan 2 is selected by default.
- If you want to change the Defender for server plan:
- 1. In the **Plan/Pricing** column, click **configure**.
- 2. Select the plan that you want.
+ If you want to change the Defender for Servers plan:
+ 1. In the **Plan/Pricing** column, select **Change plan**.
+ 2. Select the plan that you want and select **Confirm**.
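As an alternative to the portal steps above, the plan can also be enabled from PowerShell; a minimal sketch using the Az.Security module, where support for the `-SubPlan` parameter (Plan 1 versus Plan 2) depends on your module version:

```powershell
# Enable Microsoft Defender for Servers on the current subscription context.
# "Standard" turns the plan on; "Free" turns it off. The -SubPlan value selects
# Plan 1 ("P1") or Plan 2 ("P2") if your Az.Security version supports it.
Set-AzSecurityPricing -Name "VirtualMachines" -PricingTier "Standard" -SubPlan "P2"
```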
The following table describes what's included in each plan at a high level.
The following table describes what's included in each plan at a high level.
| Microsoft threat and vulnerability management | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | | Flexibility to use Microsoft Defender for Cloud or Microsoft 365 Defender portal | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | | Integration of Microsoft Defender for Cloud and Microsoft Defender for Endpoint (alerts, software inventory, Vulnerability Assessment) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-| Log-analytics (500MB free) | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| Log-analytics (500 MB free) | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
| Security Policy & Regulatory Compliance | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | Vulnerability Assessment using Qualys | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | Threat detections: OS level, network layer, control plane | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
The following table describes what's included in each plan at a high level.
The threat detection and protection capabilities provided with Microsoft Defender for Servers include: -- **Integrated license for Microsoft Defender for Endpoint** - Microsoft Defender for Servers includes [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender). Together, they provide comprehensive endpoint detection and response (EDR) capabilities. When you enable Microsoft Defender for Servers, you give consent for Defender for Cloud to access the Microsoft Defender for Endpoint data related to vulnerabilities, installed software, and alerts for your endpoints.
+- **Integrated license for Microsoft Defender for Endpoint** - Microsoft Defender for Servers includes [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender). Together, they provide comprehensive endpoint detection and response (EDR) capabilities. When you enable Microsoft Defender for Servers, Defender for Cloud gets access to the Microsoft Defender for Endpoint data that is related to vulnerabilities, installed software, and alerts for your endpoints.
When Defender for Endpoint detects a threat, it triggers an alert. The alert is shown in Defender for Cloud. From Defender for Cloud, you can also pivot to the Defender for Endpoint console, and perform a detailed investigation to uncover the scope of the attack. For more information, see [Protect your endpoints](integration-defender-for-endpoint.md). -- **Vulnerability assessment tools for machines** - Microsoft Defender for Servers includes a choice of vulnerability discovery and management tools for your machines. From Defender for Cloud's settings pages, you can select which of these tools to deploy to your machines and the discovered vulnerabilities will be shown in a security recommendation.
+- **Vulnerability assessment tools for machines** - Microsoft Defender for Servers includes a choice of vulnerability discovery and management tools for your machines. From Defender for Cloud's settings pages, you can select the tools to deploy to your machines. The discovered vulnerabilities are shown in a security recommendation.
- - **Microsoft threat and vulnerability management** - Discover vulnerabilities and misconfigurations in real time with Microsoft Defender for Endpoint, and without the need of additional agents or periodic scans. [Threat and vulnerability management](/microsoft-365/security/defender-endpoint/next-gen-threat-and-vuln-mgt) prioritizes vulnerabilities based on the threat landscape, detections in your organization, sensitive information on vulnerable devices, and business context. Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md)
+ - **Microsoft threat and vulnerability management** - Discover vulnerabilities and misconfigurations in real time with Microsoft Defender for Endpoint, and without the need of other agents or periodic scans. [Threat and vulnerability management](/microsoft-365/security/defender-endpoint/next-gen-threat-and-vuln-mgt) prioritizes vulnerabilities according to the threat landscape, detections in your organization, sensitive information on vulnerable devices, and the business context. Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md)
- **Vulnerability scanner powered by Qualys** - The Qualys scanner is one of the leading tools for real-time identification of vulnerabilities in your Azure and hybrid virtual machines. You don't need a Qualys license or even a Qualys account - everything's handled seamlessly inside Defender for Cloud. Learn more in [Defender for Cloud's integrated Qualys scanner for Azure and hybrid machines](deploy-vulnerability-assessment-vm.md). - **Just-in-time (JIT) virtual machine (VM) access** - Threat actors actively hunt accessible machines with open management ports, like RDP or SSH. All of your virtual machines are potential targets for an attack. When a VM is successfully compromised, it's used as the entry point to attack further resources within your environment.
- When you enable Microsoft Defender for Servers, you can use just-in-time VM access to lock down the inbound traffic to your VMs, reducing exposure to attacks while providing easy access to connect to VMs when needed. For more information, see [Understanding JIT VM access](just-in-time-access-overview.md).
+ When you enable Microsoft Defender for Servers, you can use just-in-time VM access to lock down the inbound traffic to your VMs. This reduces exposure to attacks and provides easy access to connect to VMs when needed. For more information, see [Understanding JIT VM access](just-in-time-access-overview.md).
- **File integrity monitoring (FIM)** - File integrity monitoring (FIM), also known as change monitoring, examines files and registries of operating system, application software, and others for changes that might indicate an attack. A comparison method is used to determine if the current state of the file is different from the last scan of the file. You can use this comparison to determine if valid or suspicious modifications have been made to your files.
The threat detection and protection capabilities provided with Microsoft Defende
- **Adaptive application controls (AAC)** - Adaptive application controls are an intelligent and automated solution for defining allowlists of known-safe applications for your machines.
- When you've enabled and configured adaptive application controls, you'll get security alerts if any application runs other than the ones you've defined as safe. For more information, see [Use adaptive application controls to reduce your machines' attack surfaces](adaptive-application-controls.md).
+ After you enable and configure adaptive application controls, you get security alerts if any application runs other than the ones you defined as safe. For more information, see [Use adaptive application controls to reduce your machines' attack surfaces](adaptive-application-controls.md).
- **Adaptive network hardening (ANH)** - Applying network security groups (NSG) to filter traffic to and from resources, improves your network security posture. However, there can still be some cases in which the actual traffic flowing through the NSG is a subset of the NSG rules defined. In these cases, further improving the security posture can be achieved by hardening the NSG rules, based on the actual traffic patterns.
- Adaptive Network Hardening provides recommendations to further harden the NSG rules. It uses a machine learning algorithm that factors in actual traffic, known trusted configuration, threat intelligence, and other indicators of compromise, and then provides recommendations to allow traffic only from specific IP/port tuples. For more information, see [Improve your network security posture with adaptive network hardening](adaptive-network-hardening.md).
+ Adaptive Network Hardening provides recommendations to further harden the NSG rules. It uses a machine learning algorithm that factors in actual traffic, known trusted configuration, threat intelligence, and other indicators of compromise. ANH then provides recommendations to allow traffic only from specific IP and port tuples. For more information, see [Improve your network security posture with adaptive network hardening](adaptive-network-hardening.md).
- **Docker host hardening** - Microsoft Defender for Cloud identifies unmanaged containers hosted on IaaS Linux VMs, or other Linux machines running Docker containers. Defender for Cloud continuously assesses the configurations of these containers. It then compares them with the Center for Internet Security (CIS) Docker Benchmark. Defender for Cloud includes the entire ruleset of the CIS Docker Benchmark and alerts you if your containers don't satisfy any of the controls. For more information, see [Harden your Docker hosts](harden-docker-hosts.md).
The threat detection and protection capabilities provided with Microsoft Defende
- Well-known toolkits and crypto mining software
- - Shellcode, which is a small piece of code typically used as the payload in the exploitation of a software vulnerability.
+ - Shellcode - a small piece of code typically used as the payload in the exploitation of a software vulnerability.
- Injected malicious executable in process memory
The threat detection and protection capabilities provided with Microsoft Defende
- **Linux auditd alerts and Log Analytics agent integration (Linux only)** - The auditd system consists of a kernel-level subsystem, which is responsible for monitoring system calls. It filters them by a specified rule set, and writes messages for them to a socket. Defender for Cloud integrates functionalities from the auditd package within the Log Analytics agent. This integration enables collection of auditd events in all supported Linux distributions, without any prerequisites.
- Log Analytics agent for Linux collects auditd records and enriches and aggregates them into events. Defender for Cloud continuously adds new analytics that use Linux signals to detect malicious behaviors on cloud and on-premises Linux machines. Similar to Windows capabilities, these analytics span across suspicious processes, dubious sign-in attempts, kernel module loading, and other activities. These activities can indicate a machine is either under attack or has been breached.
+ Log Analytics agent for Linux collects auditd records and enriches and aggregates them into events. Defender for Cloud continuously adds new analytics that use Linux signals to detect malicious behaviors on cloud and on-premises Linux machines. Similar to Windows capabilities, these analytics include tests that check for suspicious processes, dubious sign-in attempts, kernel module loading, and other activities. These activities can indicate a machine is either under attack or has been breached.
For a list of the Linux alerts, see the [Reference table of alerts](alerts-reference.md#alerts-linux).
In this article, you learned about Microsoft Defender for Servers.
For related material, see the following page: -- Whether an alert is generated by Defender for Cloud, or received by Defender for Cloud from a different security product, you can export it. To export your alerts to Microsoft Sentinel, any third-party SIEM, or any other external tool, follow the instructions in [Exporting alerts to a SIEM](continuous-export.md).
+- Whether Defender for Cloud generates an alert or receives an alert from a different security product, you can export alerts from Defender for Cloud. To export your alerts to Microsoft Sentinel, any third-party SIEM, or any other external tool, follow the instructions in [Exporting alerts to a SIEM](continuous-export.md).
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
The **tabs** below show the features that are available, by environment, for Mic
| Aspect | Details | |--|--|
-| Kubernetes distributions and configurations | **Supported**<br> ΓÇó Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>ΓÇó [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md)<sup>[1](#footnote1)</sup><br> ΓÇó [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> ΓÇó [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[2](#footnote2)</sup> <sup>[3](#footnote3)</sup><br>ΓÇó [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> ΓÇó [Kubernetes](https://kubernetes.io/docs/home/)<br> ΓÇó [AKS Engine](https://github.com/Azure/aks-engine)<br> ΓÇó [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> ΓÇó [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> ΓÇó [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> ΓÇó [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br><br>**Unsupported**<br> ΓÇó Azure Kubernetes Service (AKS) Clusters without [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> |
+| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md)<br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>• [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br><br>**Unsupported**<br> • Azure Kubernetes Service (AKS) Clusters without [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> |
-<sup><a name="footnote1"></a>1</sup>The AKS Defender profile doesn't support AKS clusters that don't have RBAC role enabled.<br>
-<sup><a name="footnote2"></a>2</sup>Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.<br>
-<sup><a name="footnote3"></a>3</sup>To get [Microsoft Defender for Containers](../azure-arc/kubernetes/overview.md) protection for you should onboard to [Azure Arc-enabled Kubernetes](https://mseng.visualstudio.com/TechnicalContent/_workitems/recentlyupdated/) and enable Defender for Containers as an Arc extension.
+<sup><a name="footnote1"></a>1</sup>Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.<br>
+<sup><a name="footnote2"></a>2</sup>To get [Microsoft Defender for Containers](../azure-arc/kubernetes/overview.md) protection for you should onboard to [Azure Arc-enabled Kubernetes](https://mseng.visualstudio.com/TechnicalContent/_workitems/recentlyupdated/) and enable Defender for Containers as an Arc extension.
> [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alert-engine-messages.md
This article provides information on the alert types, descriptions, and severity that may be generated from the Defender for IoT engines. This information can be used to help map alerts into playbooks, define Forwarding rules, Exclusion rules, and custom alerts and define the appropriate rules within a SIEM. Alerts appear in the Alerts window, which allows you to manage the alert event.
- ### Alert news
+### Alert news
-New alerts may be added and existing alerts may be updated or disabled. Certain disabled alerts can be re-enabled from the Support page of the sensor console. Alerts tht can be re-enabled are marked with an asterisk (*) in the tables below.
+New alerts may be added and existing alerts may be updated or disabled. Certain disabled alerts can be re-enabled from the Support page of the sensor console. Alerts that can be re-enabled are marked with an asterisk (*) in the tables below.
You may have configured newly disabled alerts in your Forwarding rules. If so, you may need to update related Defender for IoT Exclusion rules, or update SIEM rules and playbooks where relevant. See [What's new in Microsoft Defender for IoT?](release-notes.md#whats-new-in-microsoft-defender-for-iot) for detailed information about changes made to alerts.
+## Supported alert types
+
+| Alert type | Description |
+|-|-|
+| Policy violation alerts | Triggered when the Policy Violation engine detects a deviation from traffic previously learned. For example: <br /> - A new device is detected. <br /> - A new configuration is detected on a device. <br /> - A device not defined as a programming device carries out a programming change. <br /> - A firmware version changed. |
+| Protocol violation alerts | Triggered when the Protocol Violation engine detects packet structures or field values that don't comply with the protocol specification. |
+| Operational alerts | Triggered when the Operational engine detects network operational incidents or a device malfunctioning. For example, a network device was stopped through a Stop PLC command, or an interface on a sensor stopped monitoring traffic. |
+| Malware alerts | Triggered when the Malware engine detects malicious network activity. For example, the engine detects a known attack such as Conficker. |
+| Anomaly alerts | Triggered when the Anomaly engine detects a deviation. For example, a device is performing network scans but isn't defined as a scanning device. |
++ ## Policy engine alerts Policy engine alerts describe detected deviations from learned baseline behavior.
Policy engine alerts describe detected deviations from learned baseline behavior
| Title | Description | Severity | |--|--|--| | Beckhoff Software Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
-| Database Login Failed | A failed sign in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major |
+| Database Login Failed | A failed sign-in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major |
| Emerson ROC Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | | External address within the network communicated with Internet | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | | Field Device Discovered Unexpectedly | A new source device was detected on the network but hasn't been authorized. | Major | | Firmware Change Detected | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | | Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | | Foxboro I/A Unauthorized Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| FTP Login Failed | A failed sign in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major |
+| FTP Login Failed | A failed sign-in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major |
| Function Code Raised Unauthorized Exception | A source device (secondary) returned an exception to a destination device (primary). | Major | | GOOSE Message Type Settings | Message (identified by protocol ID) settings were changed on a source device. | Warning | | Honeywell Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
Policy engine alerts describe detected deviations from learned baseline behavior
| Unauthorized Access to Wonderware Tag | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | | Unauthorized BACNet Object Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | | Unauthorized BACNet Route | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Database Login | A sign in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major |
+| Unauthorized Database Login | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major |
| Unauthorized Database Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | | Unauthorized Emerson ROC Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | | Unauthorized GE SRTP File Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
Policy engine alerts describe detected deviations from learned baseline behavior
| Unauthorized Siemens S7 Execution of User Defined Function | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | | Unauthorized Siemens S7 Plus Block Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | | Unauthorized Siemens S7 Plus Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized SMB Login | A sign in attempt between a source client and destination server was detected. Communication between these devices has not been authorized as learned traffic on your network. | Major |
+| Unauthorized SMB Login | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major |
| Unauthorized SNMP Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | | Unauthorized SSH Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | | Unauthorized Windows Process | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major |
Anomaly engine alerts describe detected anomalies in network activity.
| ARP Address Scan Detected | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as valid ARP scanning address. | Critical | | ARP Address Scan Detected | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as valid ARP scanning address. | Critical | | ARP Spoofing | An abnormal quantity of packets was detected in the network. This could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. | Warning |
-| Excessive Login Attempts | A source device was seen performing excessive sign in attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
-| Excessive Number of Sessions | A source device was seen performing excessive sign in attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
+| Excessive Login Attempts | A source device was seen performing excessive sign-in attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
+| Excessive Number of Sessions | A source device was seen performing excessive sign-in attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
| Excessive Restart Rate of an Outstation | An excessive number of restart commands were detected on a source device. This may be the result of an operational issue or an attempt to manipulate the device. | Major |
-| Excessive SMB login attempts | A source device was seen performing excessive sign in attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
+| Excessive SMB login attempts | A source device was seen performing excessive sign-in attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
| ICMP Flooding | An abnormal quantity of packets was detected in the network. This could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. | Warning | |* Illegal HTTP Header Content | The source device initiated an invalid request. | Critical | | Inactive Communication Channel | A communication channel between two devices was inactive during a period in which activity is usually seen. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It's recommended to review the configuration of installed program and verify that it's configured properly. | Warning | | Long Duration Address Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical |
-| Password Guessing Attempt Detected | A source device was seen performing excessive sign in attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
+| Password Guessing Attempt Detected | A source device was seen performing excessive sign-in attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
| PLC Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical | | Port Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical | | Unexpected message length | The source device sent an abnormal message. This may indicate an attempt to attack the destination device. | Critical |
defender-for-iot Architecture Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture-connections.md
Title: Microsoft Defender for IoT sensor connection methods
+ Title: OT sensor cloud connection methods - Microsoft Defender for IoT
description: Learn about the architecture models available for connecting your sensors to Microsoft Defender for IoT. Last updated 03/08/2022
-# Sensor connection methods
+# OT sensor cloud connection methods
-This article describes the architectures and connection methods supported for connecting your sensors to Microsoft Defender for IoT in the Azure portal.
+This article describes the architectures and methods supported for connecting your Microsoft Defender for IoT OT sensors to the cloud.
-All supported connection methods provide:
+All supported cloud connection methods provide:
-- **Simple deployment**, requiring no additional installations in your private Azure environment, such as for an IoT Hub
+- **Simple deployment**, requiring no extra installations in your private Azure environment, such as for an IoT Hub
- **Improved security**, without needing to configure or lock down any resource security settings in the Azure VNET
The following image shows how you can connect your sensors to the Defender for I
:::image type="content" source="media/architecture-connections/proxy-chaining.png" alt-text="Diagram of a proxy connection using proxy chaining." border="false":::
-This method supports connecting your sensors without direct internet access, using an SSL-encrypted tunnel to transfer data from the sensor to the service endpoint via proxy servers. The proxy server does not perform any data inspection, analysis, or caching.
+This method supports connecting your sensors without direct internet access, using an SSL-encrypted tunnel to transfer data from the sensor to the service endpoint via proxy servers. The proxy server doesn't perform any data inspection, analysis, or caching.
-With a proxy chaining method, Defender for IoT does not support your proxy service. It's the customer's responsibility to set up and maintain the proxy service.
+With a proxy chaining method, Defender for IoT doesn't support your proxy service. It's the customer's responsibility to set up and maintain the proxy service.
For more information, see [Connect via proxy chaining](connect-sensors.md#connect-via-proxy-chaining).
For more information, see [Connect via multi-cloud vendors](connect-sensors.md#c
## Working with a mixture of sensor software versions
-If you are a customer with an existing production deployment, we recommend that upgrade any legacy sensor versions to version 22.1.x.
+If you're a customer with an existing production deployment, we recommend that you upgrade any legacy sensor versions to version 22.1.x.
While you'll need to migrate your connections before the [legacy version reaches end of support](release-notes.md#versioning-and-support-for-on-premises-software-versions), you can currently deploy a hybrid network of sensors, including legacy software versions with their IoT Hub connections, and sensors with the connection methods described in this article.
defender-for-iot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture.md
Title: What is agentless solution architecture
-description: Learn about Microsoft Defender for IoT agentless architecture and information flow.
+ Title: System architecture for OT monitoring - Microsoft Defender for IoT
+description: Learn about the Microsoft Defender for IoT system architecture and data flow.
Previously updated : 02/06/2022 Last updated : 03/24/2022
-# Microsoft Defender for IoT architecture
+# System architecture for OT system monitoring
-This article describes the functional system architecture of the Defender for IoT agentless solution. Microsoft Defender for IoT offers two sets of capabilities to fit your environment's needs, agentless solution for organizations, and agent-based solution for device builders. This article provides architectural information about the agentless solution for organizations.
+The Microsoft Defender for IoT system is built to provide broad coverage and visibility from diverse data sources.
-## Agentless solution architecture for organizations
-### Defender for IoT components
+The following image shows how data can stream into Defender for IoT from network sensors, Microsoft Defender for Endpoint, and third party sources to provide a unified view of IoT/OT security. Defender for IoT in the Azure portal provides asset inventories, vulnerability assessments, and continuous threat monitoring.
-Defender for IoT connects both to the Azure cloud and to on-premises components. The solution is designed for scalability in large and geographically distributed environments with multiple remote locations. This solution enables a multi-layered distributed architecture by country, region, business unit, or zone.
-Microsoft Defender for IoT includes the following components:
+Defender for IoT connects to both cloud and on-premises components, and is built for scalability in large and geographically distributed environments.
-**Cloud connected deployments**
+Defender for IoT systems include the following components:
-- Microsoft Defender for IoT sensor VM or appliance
-- Azure portal for cloud management and integration to Microsoft Sentinel
-- On-premises management console for local-site management
-- An embedded security agent (optional)
+- The Azure portal, for cloud management and integration to other Microsoft services, such as Microsoft Sentinel
+- Network sensors, deployed on either a virtual machine or a physical appliance. You can configure your OT sensors as cloud-connected sensors, or fully on-premises sensors.
+- An on-premises management console for cloud-connected or local, air-gapped site management.
+- An embedded security agent (optional).
-**Air-gapped (Offline) deployments**
+## Network sensors
-- Microsoft Defender for IoT sensor VM or appliance
-- On-premises management console for local site management
+Defender for IoT network sensors discover and continuously monitor network traffic on IoT and OT devices.
-### Microsoft Defender for IoT sensors
+- Purpose-built for IoT and OT networks, sensors connect to a SPAN port or network TAP and can provide visibility into IoT and OT risks within minutes of connecting to the network.
-The Defender for IoT sensors discover, and continuously monitor network devices. Sensors collect ICS network traffic using passive (agentless) monitoring on IoT and OT devices.
-
-Purpose-built for IoT and OT networks, the agentless technology delivers deep visibility into IoT and OT risk within minutes of being connected to the network. It has zero performance impact on the network and network devices due to its non-invasive, Network Traffic Analysis (NTA) approach.
-
-Applying patented, IoT and OT-aware behavioral analytics and Layer-7 Deep Packet Inspection (DPI), it allows you to analyze beyond traditional signature-based solutions to immediately detect advanced IoT and OT threats (such as fileless malware) based on anomalous or unauthorized activity.
-
-Defender for IoT sensors connects to a SPAN port or network TAP and immediately begins performing DPI on IoT and OT network traffic.
-
-Data collection, processing, analysis, and alerting takes place directly on the sensor. This process makes it ideally suited for locations with low bandwidth or high latency connectivity, because only metadata is transferred to the management console.
+- Sensors use IoT and OT-aware analytics engines and Layer-6 Deep Packet Inspection (DPI) to detect IoT and OT threats, such as fileless malware, based on anomalous or unauthorized activity.
-The sensor includes five analytics detection engines. The engines trigger alerts based on analysis of both real-time and pre-recorded traffic. The following engines are available:
+Data collection, processing, analysis, and alerting takes place directly on the sensor, which can be ideal for locations with low bandwidth or high latency connectivity, because only metadata is transferred onward, either to the Azure portal for cloud management or to an on-premises management console.
-#### Protocol violation detection engine
-The protocol violation detection engine identifies the use of packet structures and field values that violate ICS protocol specifications, for example: Modbus exception, and Initiation of an obsolete function code alerts.
+### Cloud-connected vs local sensors
-#### Policy violation detection engine
-Using machine learning, the policy violation detection engine alerts users of any deviation from baseline behavior, such as unauthorized use of specific function codes, access to specific objects, or changes to device configuration. For example: DeltaV software version changed, and Unauthorized PLC programming alerts. Specifically, the policy violation engine models the ICS networks as deterministic sequences of states and transitionsΓÇöusing a patented technique called Industrial Finite State Modeling (IFSM). The policy violation detection engine establishes a baseline of the ICS networks, so that the platform requires a shorter learning period to build a baseline of the network than generic mathematical approaches or analytics, which were originally developed for IT rather than OT networks.
+Cloud-connected sensors are sensors that are connected to Defender for IoT in Azure, and differ from locally managed sensors as follows:
-#### Industrial malware detection engine
-The industrial malware detection engine identifies behaviors that indicate the presence of known malware, such as Conficker, Black Energy, Havex, WannaCry, NotPetya, and Triton.
+When you have a cloud-connected sensor:
-#### Anomaly detection engine
-The anomaly detection engine detects unusual machine-to-machine (M2M) communications and behaviors. By modeling ICS networks as deterministic sequences of states and transitions, the platform requires a shorter learning period than generic mathematical approaches or analytics originally developed for IT rather than OT. It also detects anomalies faster, with minimal false positives. Anomaly detection engine alerts include Excessive SMB sign in attempts, and PLC Scan Detected alerts.
+- All data that the sensor detects is displayed in the sensor console, but alert information is also delivered to Azure, where it can be analyzed and shared with other Azure services.
-#### Operational incident detection
-The operational incident detection detects operational issues such as intermittent connectivity that can indicate early signs of equipment failure. For example, the device is thought to be disconnected (unresponsive), and Siemens S7 stop PLC command was sent alerts.
+- Microsoft threat intelligence packages can also be automatically pushed to cloud-connected sensors.
-### Management consoles
-Managing Microsoft Defender for IoT across hybrid environments is accomplished via three management portals:
-- Sensor console
-- The on-premises management console
-- The Azure portal
+- The sensor name defined during onboarding is the name displayed in the sensor, and is read-only from the sensor console.
-### Sensor console
-Sensor detections are displayed in the sensor console, where they can be viewed, investigated, and analyzed in a network map, device inventory, and in an extensive range of reports, for example risk assessment reports, data mining queries and attack vectors. You can also use the console to view and handle threats detected by sensor engines, forward information to partner systems, manage users, and more.
+In contrast, when working with locally managed sensors:
+- View any data for a specific sensor from the sensor console. For a unified view of all information detected by several sensors, use an on-premises management console. For more information, see [Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md).
-### On-premises management console
-The on-premises management console enables security operations center (SOC) operators to manage and analyze alerts aggregated from multiple sensors into one single dashboard and provides an overall view of the health of the OT networks.
+- You must manually upload any threat intelligence packages.
-This architecture provides a comprehensive unified view of the network at a SOC level, optimized alert handling, and the control of operational network security, ensuring that decision-making and risk management remain flawless.
+- Sensor names can be updated in the sensor console.
-In addition to multi-tenancy, monitoring, data analysis, and centralized sensor remote control, the management console provides extra system maintenance tools (such as alert exclusion) and fully customized reporting features for each of the remote appliances. This architecture supports both local management at a site level, zone level, and global management within the SOC.
+## Analytics engines
-The management console can be deployed for high-availability configuration, which provides a backup console that periodically receives backups of all configuration files required for recovery. If the primary console fails, the local site management appliances will automatically fail over to synchronize with the backup console to maintain availability without interruption.
+Defender for IoT sensors apply analytics engines on ingested data, triggering alerts based on both real-time and pre-recorded traffic.
-Tightly integrated with your SOC workflows and run books, it enables easy prioritization of mitigation activities and cross-site correlation of threats.
+Analytics engines provide machine learning and profile analytics, risk analysis, a device database and set of insights, threat intelligence, and behavioral analytics.
-- Holistic - reduce complexity with a single unified platform for device management, risk and vulnerability management, and threat monitoring with incident response.
+For example, for OT networks, the **policy violation detection** engine alerts users of any deviation from baseline behavior, such as unauthorized use of specific function codes, access to specific objects, or changes to device configuration. The policy violation engine models industrial control system (ICS) networks as deterministic sequences of states and transitions, using a patented technique called Industrial Finite State Modeling (IFSM). The policy violation detection engine establishes a baseline of the ICS networks, so that the platform requires a shorter learning period to build a baseline of the network than generic mathematical approaches or analytics, which were originally developed for IT rather than OT networks.
-- Aggregation and correlation: display, aggregate, and analyze data and alerts collected from all sites.
+Specifically for OT networks, OT network sensors also provide the following analytics engines:
-- Control all sensors: configure and monitor all sensors from a single location.
+- **Protocol violation detection engine**. Identifies the use of packet structures and field values that violate ICS protocol specifications, for example: Modbus exception, and Initiation of an obsolete function code alerts.
-### Azure portal
+- **Industrial malware detection engine**. Identifies behaviors that indicate the presence of known malware, such as Conficker, Black Energy, Havex, WannaCry, NotPetya, and Triton.
-Defender for IoT in the Azure portal in Azure is used to help you:
+- **Anomaly detection engine**. Detects unusual machine-to-machine (M2M) communications and behaviors. By modeling ICS networks as deterministic sequences of states and transitions, the platform requires a shorter learning period than generic mathematical approaches or analytics originally developed for IT rather than OT. It also detects anomalies faster, with minimal false positives. Anomaly detection engine alerts include Excessive SMB sign-in attempts, and PLC Scan Detected alerts.
-- Purchase solution appliances
+- **Operational incident detection**. Detects operational issues such as intermittent connectivity that can indicate early signs of equipment failure. For example, the device is thought to be disconnected (unresponsive), and Siemens S7 stop PLC command was sent alerts.
-- Install and update software
+## Management options
-- Onboard sensors to Azure
+Defender for IoT provides hybrid network support using the following management options:
-- Update Threat Intelligence packages
+- **The Azure portal**. Use the Azure portal as a single pane of glass to view all data ingested from your devices via network sensors. The Azure portal provides extra value, such as [workbooks](workbooks.md), [connections to Microsoft Sentinel](/azure/sentinel/iot-solution?toc=%2Fazure%2Fdefender-for-iot%2Forganizations%2Ftoc.json&bc=%2Fazure%2Fdefender-for-iot%2Fbreadcrumb%2Ftoc.json&tabs=use-out-of-the-box-analytics-rules-recommended), and more.
+
+ Also use the Azure portal to obtain new appliances and software updates, onboard and maintain your sensors in Defender for IoT, and update threat intelligence packages.
+
+ :::image type="content" source="media/architecture/portal.png" alt-text="Screenshot of the Defender for I O T default view on the Azure portal."lightbox="media/architecture/portal.png":::
+
+- **The sensor console**. You can also view detections for devices connected to a specific sensor from the sensor's console. Use the sensor console to view a network map, an extensive range of reports, forward information to partner systems, and more.
+
+ :::image type="content" source="media/release-notes/new-interface.png" alt-text="Screenshot that shows the updated interface." lightbox="media/release-notes/new-interface.png":::
+
+- **The on-premises management console**. In air-gapped environments, you can get a central view of data from all of your sensors from an on-premises management console. The on-premises management console also provides extra maintenance tools and reporting features.
## Next steps
-[Defender for IoT FAQ](resources-frequently-asked-questions.md)
+For OT environments, understand the supported methods for connecting network sensors to Defender for IoT.
+
+For more information, see:
+
+- [Frequently asked questions](resources-frequently-asked-questions.md)
+- [Sensor connection methods](architecture-connections.md)
+- [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md)
defender-for-iot Concept Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-key-concepts.md
- Title: Key advantages
-description: Learn about basic Defender for IoT concepts.
Previously updated : 11/09/2021---
-# Basic concepts
-
-This article describes key advantages of Microsoft Defender for IoT.
-
-## Rapid non-invasive deployment and passive monitoring
-
-Defender for IoT sensors connects to switch SPAN (Mirror) ports, and network TAPs and immediately begin collecting ICS network traffic via passive (agentless) monitoring. Deep packet inspection (DPI) is used to dissect traffic from both serial and Ethernet control network equipment. Defender for IoT has zero impact on OT networks because it isn't placed in the data path and doesn't actively scan OT devices.
-
-To deliver instant snapshots of detailed Windows device information, Defender for IoT sensor can be configured to supplement passive monitoring with an optional active component. This component uses safe, vendor-approved commands to query Windows devices for device details, as often or as infrequently as you want.
-
-## Embedded knowledge of ICS protocols, devices, and applications
-
-DPI alone is not enough to identify protocol anomalies and identify device at a granular level. The Defender for IoT sensor addresses some of the largest and most complex environments. More than 1,300 OT networks have been analyzed to date, across all industrial sectors.
-
-## Analytics and self-learning engines
-
-Engines identify security issues via continuous monitoring and five analytics engines that incorporate self-learning to eliminate the need for updating signatures or defining rules. The engines use ICS-specific behavioral analytics and data science to continuously analyze OT network traffic for anomalies. The five engines are:
--- **Protocol violation detection**: Identifies the use of packet structures and field values that violate ICS protocol specifications.--- **Policy violation detection**: Identifies policy violations such as unauthorized use of function codes, access to specific objects, or changes to device configuration.--- **Industrial malware detection**: Identifies behaviors that indicate the presence of known malware such as Conficker, Black Energy, Havex, WannaCry, and NotPetya.--- **Anomaly detection**: Detects unusual machine-to-machine (M2M) communications and behaviors. By modeling ICS networks as deterministic sequences of states and transitions, the engine uses a patented technique called Industrial Finite State Modeling (IFSM). The solution requires a shorter learning period than generic mathematical approaches or analytics, which were originally developed for IT rather than OT. It also detects anomalies faster, with minimal false positives.--- **Operational incident detection**: Identifies operational issues such as intermittent connectivity that can indicate early signs of equipment failure.
-
-Tools are available to enable and disable sensor engines. Alerts are not triggered from engines that are disabled. See [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md).
-
-You can fine-tune detection instructions by working with Smart IT learning. See [Learning and Smart IT Learning modes](how-to-control-what-traffic-is-monitored.md#learning-and-smart-it-learning-modes)
-
-## Detection engines and alerts
-
-Alerts are triggered when sensor engines detect changes in network traffic and behavior that need your attention. This section describes the kind of alerts that each engine triggers.
-
-| Alert type | Description |
-|-|-|
-| Policy violation alerts | Triggered when the Policy Violation engine detects a deviation from traffic previously learned. For example: <br /> - A new device is detected. <br /> - A new configuration is detected on a device. <br /> - A device not defined as a programming device carries out a programming change. <br /> - A firmware version changed. |
-| Protocol violation alerts | Triggered when the Protocol Violation engine detects packet structures or field values that don't comply with the protocol specification. |
-| Operational alerts | Triggered when the Operational engine detects network operational incidents or a device malfunctioning. For example, a network device was stopped through a Stop PLC command, or an interface on a sensor stopped monitoring traffic. |
-| Malware alerts | Triggered when the Malware engine detects malicious network activity. For example, the engine detects a known attack such as Conficker. |
-| Anomaly alerts | Triggered when the Anomaly engine detects a deviation. For example, a device is performing network scans but is not defined as a scanning device. |
-
-For more alert information, see:
--- [Manage the alert event](how-to-manage-the-alert-event.md)--- [Accelerate alert workflows](how-to-accelerate-alert-incident-response.md)--- [Alert types and descriptions](alert-engine-messages.md)-
-## Network Traffic Analysis for risk and vulnerability assessment
-
-Unique in the industry, Defender for IoT uses proprietary Network Traffic Analysis (NTA) algorithms to passively identify all network and endpoint vulnerabilities, such as:
--- Unauthorized remote access connections-- Rogue or undocumented devices-- Weak authentication-- Vulnerable devices (based on unpatched CVEs)-- Unauthorized bridges between subnets-- Weak firewall rules-
-## Data mining for investigations, forensics, and threat hunting
-
-The platform provides an intuitive data-mining interface for granular searching of historical traffic across all relevant dimensions. Examples include time period, IP address, MAC address, and ports. You can also make protocol-specific queries based on function codes, protocol services, and modules. Full-fidelity PCAPs are available for further drill-down analysis.
-
-## Sensor Cloud Management mode
-
-The Sensor Cloud Management mode determines where device, alert, and other information that the sensor detects is displayed.
-
-For **cloud-connected sensors**, information that the sensor detects is displayed in the sensor console. Alert information is delivered to Azure and can be shared with other Azure services, such as Microsoft Sentinel.
-
-For **locally connected sensors**, information that the sensor detects is displayed in the sensor console. Detection information is also shared with the on-premises management console if the sensor is connected to it.
-
-## Air-gapped networks
-
-If you're working in an air-gapped environment, the on-premises management console in Defender for IoT delivers a real-time view of key IoT and OT risk indicators and alerts across all of your facilities. Tightly integrated with your SOC workflows and runbooks, it enables easy prioritization of mitigation activities and cross-site correlation of threats.
-
-Defender for IoT provides a consolidated view of all your devices. It also provides critical information about the devices, such as type (PLC, RTU, DCS, and more), manufacturer, model, and firmware revision level, as well as alert information.
-
-Defender for IoT enables the effective management of multiple deployments and a comprehensive unified view of the network. Defender for IoT optimizes alert handling and control of operational network security.
-
-The on-premises management console is a web-based administrative platform that lets you monitor and control the activities of global sensor installations. In addition to managing the data received from deployed sensors, the on-premises management console seamlessly integrates data from various business resources: CMDBs, DNS, firewalls, Web APIs, and more.
-
-We recommend that you familiarize yourself with the concepts, capabilities, and features available to sensors before working with the on-premises management console.
-
-## Integrations
-
-You can expand the capabilities of Defender for IoT by sharing both device and alert information with partner systems. Integrations help enterprises bridge previously siloed security solutions to significantly enhance device visibility and threat intelligence. Integrations also help enterprises accelerate the system-wide responses and mitigate risks faster.
-
-Integrations reduce complexity and eliminate IT and OT silos by integrating them into your existing SOC workflows and security stack. For example:
--- SIEMs such as IBM QRadar, Splunk, ArcSight, LogRhythm, and RSA NetWitness--- Security orchestration and ticketing systems such as ServiceNow and IBM Resilient--- Secure remote access solutions such as CyberArk Privileged Session Manager (PSM) and BeyondTrust--- Secure network access control (NAC) systems such as Aruba ClearPass and Forescout CounterACT--- Firewalls such as Fortinet and Check Point-
-## Complete protocol support
-
-In addition to embedded protocol support, you can secure IoT and ICS devices running proprietary and custom protocols, or protocols that deviate from any standard. By using the Horizon Open Development Environment (ODE) SDK, developers can create dissector plug-ins that decode network traffic based on defined protocols. Services analyzes traffic to provide complete monitoring, alerting, and reporting. Use Horizon to:
--- Expand visibility and control without the need to upgrade to new versions.--- Secure proprietary information by developing on-site as an external plug-in.--- Localize text for alerts, events, and protocol parameters.-
-In addition, you can use proprietary protocol alerts to communicate information:
--- About traffic detections based on protocols and underlying protocols in a proprietary Horizon plug-in.--- About a combination of protocol fields from all protocol layers. For example, in an environment running MODBUS, you might want to generate an alert when the sensor detects a write command to a memory register on a specific IP address and Ethernet destination. Or you might want to generate an alert when any access is performed to a specific IP address.-
-Alerts are triggered when Horizon alert rule conditions are met.
-
-In addition, working with Horizon custom alerts lets you write your own alert titles and messages. Resolved protocol fields and values can be embedded in the alert message text.
-
-Using custom, condition-based alert triggering and messaging helps pinpoint specific network activity and effectively update your security, IT, and operational teams.
-
-For a complete list of supported protocols see, [Supported Protocols](concept-supported-protocols.md).
-
-## What is an Inventory Device
-
-The Defender for IoT Device inventory displays an extensive range of asset attributes that are detected by sensors monitoring the organizations networks and managed endpoints.
-
-Defender for IoT will identify and classify devices as a single unique network device in the inventory for:
-
-1. Standalone IT/OT/IoT devices (w/ 1 or multiple NICs)
-1. Devices composed of multiple backplane components (including all racks/slots/modules)
-1. Devices acting as network infrastructure such as Switch/Router (w/ multiple NICs).
-
-Public internet IP addresses, multicast groups, and broadcast groups are not considered inventory devices.
-Devices that have been inactive for more than 60 days are classified as inactive Inventory devices.
--
-## High availability
-
-Increase the resilience of your Defender for IoT deployment by installing a high-availability appliance in the on-premises management console. High-availability deployments ensure that your managed sensors continuously report to an active on-premises management console.
-
-This deployment is implemented with an on-premises management console pair that includes a primary and secondary appliance.
-
-## Localization
-
-Many console features support an extensive range of languages.
-
-## Next step
-
-[Getting started with Defender for IoT](getting-started.md)
defender-for-iot Concept Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-sentinel-integration.md
Title: OT threat monitoring in enterprise SOCs
-description: Learn about how integration with Microsoft Sentinel can help SOC teams bridge the gap between IT and OT security sectors
Previously updated : 01/02/2022-
+ Title: OT threat monitoring in enterprise security operation center (SOC) teams - Microsoft Defender for IoT
+description: Learn about how integration with Microsoft Sentinel can help security operation center teams bridge the gap between IT and OT security.
Last updated : 03/24/2022+ # OT threat monitoring in enterprise SOCs
-This article describes how integration with Microsoft Sentinel can help SOC teams bridge the gap between IT and OT security sectors.
+As more business-critical industries transform their OT systems to digital IT infrastructures, security operation center (SOC) teams and chief information security officers (CISOs) are increasingly responsible for threats from OT networks.
-## About the digital transformation in business-critical industries
+Together with the new responsibilities, SOC teams deal with new challenges, including:
-As the digital transformation in business-critical industries connects OT systems with IT infrastructures, the OT/IT convergence puts data, systems, and safety at risk.
+- **Lack of OT expertise and knowledge** within current SOC teams regarding OT alerts, industrial equipment, protocols, and network behavior. This often translates into vague or minimized understanding of OT incidents and their business impact.
-As a result, CISOs and Security Operations Center (SOC) teams are becoming increasingly responsible for threats from OT network segments that they traditionally did not handle.
+- **Siloed or inefficient communication and processes** between OT and SOC organizations.
-This means SOC teams must deal with various new challenges, including:
+- **Limited technology and tools**, including:
-**People**
+ - Lack of visibility and insight into OT networks.
-- Lack of OT expertise and knowledge within current SOC teams regarding OT alerts, industrial equipment, protocols, and network behavior. This often translates into vague or minimized understanding of OT incidents and their business impact.
+ - Limited insight about events across enterprise IT/OT networks, including tools that don't allow SOC teams to evaluate and link information across data sources in IT/OT environments.
-**Processes**
+ - Low level of automated security remediation for OT networks.
-- Siloed or inefficient communication and processes between OT and SOC organizations.
+ - Costly and time-consuming effort needed to integrate OT security solutions into existing SOC solutions.
-**Technology and Tools**
+Without OT telemetry, context and integration with existing SOC tools and workflows, OT security and operational threats may be handled incorrectly, or even go unnoticed.
-- Lack of visibility and insight into OT networks.
+## Integrate Defender for IoT and Microsoft Sentinel
-- Limited insight about events across enterprise IT/OT networks, including tools that don't allow SOC teams to evaluate and link information across data sources in IT/OT environments.
-- Low level of automated security remediation for OT networks.
-- Costly and time-consuming effort needed to integrate OT security solutions into existing SOC solutions.
-Without OT telemetry, context and integration with existing SOC tools and workflows, OT security and operational threats may be handled incorrectly, or even go unnoticed.
-
-## About Microsoft Sentinel
-
-Microsoft Sentinel is a scalable, cloud-native, security information event management (SIEM) and, security orchestration automated response (SOAR) solution that lets users:
-- Collect data at cloud scale across all users, devices, applications, and infrastructure, both on-premises and in multiple clouds.
-- Detect previously undetected threats and minimize false positives using Microsoft's analytics and unparalleled threat intelligence.
-- Investigate threats with artificial intelligence, and hunt for suspicious activities at scale, tapping into years of cyber security work at Microsoft.
-- Respond to incidents rapidly with built-in orchestration and automation of common tasks.
-## About the Defender for IoT and Microsoft Sentinel Integration
-
-By bringing rich telemetry into Microsoft Sentinel from Microsoft Defender for IoT, SOC teams can bridge the gap between IT and OT security sectors. This allows SOC teams to detect and respond faster during the entire attack timelineΓÇöenhancing communication, processes, and response time for both security analysts and OT personnel.
--
-To set up the integration, see [Integrate Microsoft Defender for IoT and Microsoft Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended).
-
-## OT Security
-
-This section describes how the integration helps you handled OT threats.
-
-**OT Security Alerts**
+Microsoft Sentinel is a scalable cloud solution for security information event management (SIEM) and security orchestration automated response (SOAR). SOC teams can use Microsoft Sentinel to collect data across networks, detect and investigate threats, and respond to incidents.
The Defender for IoT and Microsoft Sentinel integration delivers out-of-the-box capabilities to SOC teams to help them efficiently and effectively view, analyze, and respond to OT security alerts, and the incidents they generate in a broader organizational threat context.
-Once ingested into Sentinel, security experts can work with IoT/OT-specific analytics rules, workbooks, and SOAR playbooks, as well as incident mappings to MITRE ATT&CK for ICS.
+Bring Defender for IoT's rich telemetry into Microsoft Sentinel to bridge the gap between OT and SOC teams with the Microsoft Sentinel data connector for Defender for IoT and the **IoT OT Threat Monitoring with Defender for IoT** solution.
-**MITRE ATT&CK for ICS**
+The **IoT OT Threat Monitoring with Defender for IoT** solution installs out-of-the-box security content to your Microsoft Sentinel workspace, including analytics rules to automatically open incidents, workbooks to visualize and monitor data, and playbooks to automate response actions.
-MITRE ATT&CK® for ICS is a knowledge base used for describing the actions an adversary may take while operating within an ICS network. The knowledge base can be used to better characterize and describe post-compromise adversary behavior.
-
-The Microsoft Defender for IoT integration delivers a library of mappings that link Microsoft Sentinel incidents to MITRE ATT&CK for ICS tactics.
--
-## Workbooks, analytics rules, and SOAR playbooks
-
-This section describes how Microsoft Sentinel workbooks, analytics rules, and SOAR playbooks help you monitor and respond to OT threats.
+Once Defender for IoT data is ingested into Microsoft Sentinel, security experts can work with IoT/OT-specific analytics rules, workbooks, and SOAR playbooks, as well as incident mappings to [MITRE ATT&CK for ICS](https://collaborate.mitre.org/attackics/index.php/Overview).
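Once data is flowing, a quick query in the Microsoft Sentinel **Logs** page can confirm that OT alerts are being ingested. The following is a minimal sketch only: it assumes the alerts land in the standard `SecurityAlert` table with the product name shown below, which can vary by connector version, so verify the exact value in your own workspace.

```kusto
// Count ingested Defender for IoT alerts by name and severity (assumed product name).
SecurityAlert
| where ProductName == "Azure Security Center for IoT"
| summarize Alerts = count() by AlertName, AlertSeverity
| order by Alerts desc
```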
### Workbooks
-To visualize and monitor your Defender for IoT data, use the workbooks deployed to your Microsoft Sentinel workspace as part of the IoT OT Threat Monitoring with Defender for IoT solution.
+To visualize and monitor your Defender for IoT data, use the workbooks deployed to your Microsoft Sentinel workspace as part of the **IoT OT Threat Monitoring with Defender for IoT** solution.
Defender for IoT workbooks provide guided investigations for OT entities based on open incidents, alert notifications, and activities for OT assets. They also provide a hunting experience across the MITRE ATT&CK® framework for ICS, and are designed to enable analysts, security engineers, and MSSPs to gain situational awareness of OT security posture.
-For example:
+For example, workbooks can display alerts by any of the following dimensions:
-**Alert workbooks**
-
-Microsoft Sentinel Alert Workbooks show alerts by:
-- Type (policy violation, protocol violation, malware, etc.)
+- Type, such as policy violation, protocol violation, malware, and so on
- Severity
-- OT device type (PLC, HMI, engineering workstation, etc.)
+- OT device type, such as PLC, HMI, engineering workstation, and so on
- OT equipment vendor
- Alerts over time
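As a rough sketch of the kind of query that can drive such a workbook visual, the following groups OT alerts by severity over time. The table and product name are assumptions based on the standard Microsoft Sentinel alert schema; adjust them to match the data in your workspace.

```kusto
// Trend of OT alerts by severity, bucketed by day (assumed product name).
SecurityAlert
| where ProductName == "Azure Security Center for IoT"
| summarize AlertCount = count() by AlertSeverity, bin(TimeGenerated, 1d)
| order by TimeGenerated asc
```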
-**MITRE ATT&CK for ICS Workbook**
-
-Microsoft Sentinel MITRE ATT&CK for ICS workbooks show the result of mapping alerts to MITRE ATT&CK for ICS tactics, plus the distribution of tactics by count and time period.
--
-The Workbooks are described in the [Visualize and monitor Defender for IoT data](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended)
-section of the integration tutorial. Workbooks are deployed to your Microsoft Sentinel workspace as part of the IoT OT Threat Monitoring with Defender for IoT solution.
+Workbooks also show the result of mapping alerts to MITRE ATT&CK for ICS tactics, plus the distribution of tactics by count and time period. For example:
-### Analytics rules
-
-Create Microsoft Sentinel incidents for relevant alerts generated by Defender for IoT, either by using out-of-the-box analytics rules provided in the IoT OT Threat Monitoring with Defender for IoT solution, configuring analytics rules manually, or by configuring your data connector to automatically create incidents for all alerts generated by Defender for IoT.
-
-The Analytics rules are described in the [Detect threats out-of-the-box with Defender for IoT data](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended) section of the integration tutorial. The rules are deployed to your Microsoft Sentinel workspace as part of the IoT OT Threat Monitoring with Defender for IoT solution.
### SOAR playbooks

Playbooks are collections of automated remediation actions that can be run from Microsoft Sentinel as a routine. A playbook can help automate and orchestrate your threat response. It can be run manually or set to run automatically in response to specific alerts or incidents, when triggered by an analytics rule or an automation rule, respectively.
-Use SOAR playbooks, for example to:
+For example, use SOAR playbooks to:
- Open an asset ticket in ServiceNow when a new asset is detected, such as a new engineering workstation. This could be an unauthorized device that adversaries can use to reprogram PLCs.

- Send an email to relevant stakeholders when suspicious activity is detected, for example unplanned PLC reprogramming. The mail may be sent to OT personnel, such as a control engineer responsible for the related production line.
-The playbooks are described in the [Automate response to Defender for IoT alerts](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended) section of the integration tutorial.
+## Integrated incident timeline
+
+The following table shows how both the OT team, on the Defender for IoT side, and the SOC team, on the Microsoft Sentinel side, can detect and respond to threats fast across the entire attack timeline.
+
+|Microsoft Sentinel |Step |Defender for IoT |
+||||
+| | **OT alert triggered** | High confidence OT alerts, powered by Defender for IoT's *Section 52* security research group, are triggered based on data ingested to Defender for IoT. |
+|Analytics rules automatically open incidents *only* for relevant use cases, avoiding OT alert fatigue | **OT incident created** | |
+|SOC teams map business impact, including data about the site, line, compromised assets, and OT owners | **OT incident business impact mapping** | |
+|SOC teams move the incident to *Active* and start investigating, using network connections and events, workbooks, and the OT device entity page | **OT incident investigation** | Alerts are moved to *Active*, and OT teams investigate using PCAP data, detailed reports, and other device details |
+|SOC teams respond with OT playbooks and notebooks | **OT incident response** | OT teams either suppress the alert or learn it for next time, as needed |
+|After the threat is mitigated, SOC teams close the incident | **OT incident closure** | After the threat is mitigated, OT teams close the alert |
-Playbooks are deployed to your Microsoft Sentinel workspace as part of the IoT OT Threat Monitoring with Defender for IoT solution.
## Next steps

-- [Integrate Microsoft Defender for IoT and Microsoft Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended)
+For more information, see:
+- [Integrate Microsoft Defender for IoT and Microsoft Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended)
- [Detect threats out-of-the-box with Defender for IoT data](../../sentinel/detect-threats-custom.md)
- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](../../sentinel/tutorial-respond-threats-playbook.md)
defender-for-iot Connect Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/connect-sensors.md
Title: Connect sensors to Microsoft Defender for IoT
-description: Learn how to connect your sensors to Microsoft Defender for IoT on Azure
+ Title: Connect OT sensors to Microsoft Defender for IoT in the cloud
+description: Learn how to connect your Microsoft Defender for IoT OT sensors to the cloud
Last updated 03/13/2022
-# Connect your sensors to Microsoft Defender for IoT
+# Connect your OT sensors to the cloud
This article describes how to connect your sensors to the Defender for IoT portal in Azure.
If you already have a proxy set up in your Azure VNET, you can start working wit
1. Toggle on the **Enable Proxy** option and define your proxy host, port, username, and password.
-If you do not yet have a proxy configured in your Azure VNET, use the following procedures to configure your proxy:
+If you don't yet have a proxy configured in your Azure VNET, use the following procedures to configure your proxy:
1. [Define a storage account for NSG logs](#step-1-define-a-storage-account-for-nsg-logs)
For more information, see:
Define an Azure virtual machine scale set to create and manage a group of load-balanced virtual machines, where you can automatically increase or decrease the number of virtual machines as needed.
-Use the following procedure to create a scale set to use with your sensor connection. For more information, see [What are Virtual Machine scale sets?](/azure/virtual-machine-scale-sets/overview)
+Use the following procedure to create a scale set to use with your sensor connection. For more information, see [What are virtual machine scale sets?](/azure/virtual-machine-scale-sets/overview)
1. Create a scale set with the following parameter definitions:
Use the following procedure to create a scale set to use with your sensor connec
Keep the default value for **Disks** settings.
-1. Create a network interface in the `Proxyserver` subnet you created [earlier](#step-2-define-virtual-networks-and-subnets), but do not yet define a load balancer.
+1. Create a network interface in the `Proxyserver` subnet you created [earlier](#step-2-define-virtual-networks-and-subnets), but don't yet define a load balancer.
1. Define your scaling settings as follows:
Use the following procedure to create a scale set to use with your sensor connec
1. For the custom data script, do the following:
- 1. Create the following configuration script, depending on the port and services you are using:
+ 1. Create the following configuration script, depending on the port and services you're using:
```txt # Recommended minimum configuration:
To create an Azure load balancer for your sensor connection:
1. Define a dynamic frontend IP address in the `proxysrv` subnet you created [earlier](#step-2-define-virtual-networks-and-subnets), setting the availability to zone-redundant.
-1. For a backend, choose the VM scale set you created in the [earlier](#step-5-define-an-azure-virtual-machine-scale-set).
+1. For a backend, choose the virtual machine scale set you created [earlier](#step-5-define-an-azure-virtual-machine-scale-set).
1. On the port defined in the sensor, create a TCP load balancing rule connecting the frontend IP address with the backend pool. The default port is 3128.
To create an Azure load balancer for your sensor connection:
1. Define your load balancer logging:
- 1. In the Azure portal, go to the load balancer you've just created.
+ 1. In the Azure portal, go to the load balancer you've created.
1. Select **Diagnostic setting** > **Add diagnostic setting**.
For more information, see [Proxy connections with proxy chaining](architecture-c
Before you start, make sure that you have a host server running a proxy process within the site network. The proxy process must be accessible to both the sensor and the next proxy in the chain.
-We have validated this procedure using the open-source [Squid](http://www.squid-cache.org/) proxy. This proxy uses HTTP tunneling and the HTTP CONNECT command for connectivity. Any other proxy chaining connection that supports the CONNECT command can be used for this connection method.
+We've validated this procedure using the open-source [Squid](http://www.squid-cache.org/) proxy. This proxy uses HTTP tunneling and the HTTP CONNECT command for connectivity. Any other proxy chaining connection that supports the CONNECT command can be used for this connection method.
> [!IMPORTANT]
> Microsoft Defender for IoT does not offer support for Squid or any other proxy services. It is the customer's responsibility to set up and maintain the proxy service.
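If you do choose Squid for the site-level proxy, a chaining configuration might look like the following sketch. Everything here is illustrative and unsupported: the upstream proxy host name, the listening port, and the sensor subnet are placeholder assumptions that you'd replace with your own values.

```txt
# Illustrative Squid proxy-chaining configuration (placeholder values; not a supported configuration).
http_port 3128

# Assumed OT sensor subnet; replace with your own range.
acl sensor_subnet src 192.168.10.0/24
acl SSL_ports port 443
acl CONNECT method CONNECT

# Allow only CONNECT tunnels to SSL ports, and only from the sensor subnet.
http_access deny CONNECT !SSL_ports
http_access allow sensor_subnet
http_access deny all

# Forward all requests to the next proxy in the chain; no caching or inspection.
cache_peer upstream-proxy.example.com parent 3128 0 no-query default
never_direct allow all
cache deny all
```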
Before you start:
:::image type="content" source="media/architecture-connections/multi-cloud-flow-chart.png" alt-text="Flow chart to determine which connectivity method to use.":::
- - **Use public IP addresses over the internet** if you do not need to exchange data using private IP addresses
+ - **Use public IP addresses over the internet** if you don't need to exchange data using private IP addresses
- - **Use site-to-site VPN over the internet** only if you do *not* require any of the following:
+ - **Use site-to-site VPN over the internet** only if you don't require any of the following:
- Predictable throughput - SLA
Before you start:
In this case: - If you want to own and manage the routers making the connection, use ExpressRoute with customer-managed routing.
- - If you do not need to own and manage the routers making the connection, use ExpressRoute with a cloud exchange provider.
+ - If you don't need to own and manage the routers making the connection, use ExpressRoute with a cloud exchange provider.
### Configuration 1. Configure your sensor to connect to the cloud using one of the Azure Cloud Adoption Framework recommended methods. For more information, see [Connectivity to other cloud providers](/azure/cloud-adoption-framework/ready/azure-best-practices/connectivity-to-other-providers).
-1. To enable private connectivity between your VPCs and Defender for IoT, connect your VPC to an Azure VNET over a VPN connection. For example if you are connecting from an AWS VPC, see our TechCommunity blog: [How to create a VPN between Azure and AWS using only managed solutions](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/how-to-create-a-vpn-between-azure-and-aws-using-only-managed/ba-p/2281900).
+1. To enable private connectivity between your VPCs and Defender for IoT, connect your VPC to an Azure VNET over a VPN connection. For example if you're connecting from an AWS VPC, see our TechCommunity blog: [How to create a VPN between Azure and AWS using only managed solutions](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/how-to-create-a-vpn-between-azure-and-aws-using-only-managed/ba-p/2281900).
1. After your VPC and VNET are configured, connect to Defender for IoT as you would when connecting via an Azure proxy. For more information, see [Connect via an Azure proxy](#connect-via-an-azure-proxy).
If you're an existing customer with a production deployment and sensors connecte
1. **Determine which connection method is right** for each production site. For more information, see [Choose a sensor connection method](connect-sensors.md#choose-a-sensor-connection-method).
-1. **Configure any additional resources required** as described in the procedure in this article for your chosen connectivity method. For example, additional resources might include a proxy, VPN, or ExpressRoute.
+1. **Configure any other resources required** as described in the procedure in this article for your chosen connectivity method. For example, other resources might include a proxy, VPN, or ExpressRoute.
For any connectivity resources outside of Defender for IoT, such as a VPN or proxy, consult with Microsoft solution architects to ensure correct configurations, security, and high availability.
If you're an existing customer with a production deployment and sensors connecte
1. **Create a plan of action for your migration**, including planning any maintenance windows needed.
-1. **After the migration in your production environment**, you can delete any previous IoT Hubs that you had used before the migration. Make sure that any IoT Hubs you delete are not used by any other
+1. **After the migration in your production environment**, you can delete any previous IoT Hubs that you had used before the migration. Make sure that any IoT Hubs you delete aren't used by any other
- If you've upgraded your versions, make sure that all updated sensors indicate software version 22.1.x or higher.
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
Title: 'Quickstart: Getting started'
-description: In this quickstart, learn how to get started with understanding the basic workflow for Defender for IoT deployment.
+ Title: Get started with Microsoft Defender for IoT
+description: In this quickstart, set up a trial for Microsoft Defender for IoT and understand next steps required to configure your network sensors.
Previously updated : 12/02/2021- Last updated : 03/24/2022 # Quickstart: Get started with Defender for IoT
-This article provides an overview of the steps you'll take to set up Microsoft Defender for IoT. The process requires that you:
+This quickstart takes you through the initial steps of setting up Defender for IoT, including:
-- Register your subscription and sensors on Defender for IoT in the Azure portal.
-- Install the sensor and on-premises management console software.
-- Perform initial activation of the sensor and management console.
+- Add an Azure subscription to Defender for IoT
+- Identify and plan solution architecture
-## Prerequisites
-
-Here's what you need to get started with Defender for IoT.
-- Network switches that support traffic monitoring via SPAN port.
-- Hardware appliances for NTA sensors.
-- The Azure Subscription Contributor role. It's required only during onboarding for defining committed devices and connection to Microsoft Sentinel.
-If you are using a Defender for IoT sensor version lower than 22.1.x, you must also have an Azure IoT Hub (Free or Standard tier) **Contributor** role, for cloud-connected management. Make sure that the **Microsoft Defender for IoT** feature is enabled.
-
-### Supported service regions
-
-Defender for IoT routes all traffic from all European regions to the West Europe regional datacenter. It routes traffic from all remaining regions to the Central US regional datacenter.
+You can use this procedure to set up a Defender for IoT trial. The trial provides 30-day support for 1000 devices and a virtual sensor, which you can use to monitor traffic, analyze data, generate alerts, understand network risks and vulnerabilities and more.
-If you are connecting your sensors using an IoT Hub (legacy), see also the [IoT Hub supported regions](https://azure.microsoft.com/global-infrastructure/services/?products=iot-hub).
+## Prerequisites
+Before you start, make sure that you have:
-### Permissions: Sensors and on-premises management consoles
+- An Azure account. If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).
-Some of the setup steps require specific user permissions.
+- Access to an Azure subscription with the **Subscription Contributor** role.
-Administrative user permissions are required to activate the sensor and management console, upload SSL/TLS certificates, and generate new passwords.
+If you're using a Defender for IoT sensor version earlier than 22.1.x, you must also have an Azure IoT Hub (Free or Standard tier) **Contributor** role, for cloud-connected management. Make sure that the **Microsoft Defender for IoT** feature is enabled.
-### Permissions: Defender for IoT in the Azure portal
+### Permissions
-The following table describes user access permissions to Azure portal tools:
+Defender for IoT users require the following permissions:
| Permission | Security reader | Security admin | Subscription contributor | Subscription owner |
|--|--|--|--|--|
-| View details and access software, activation files and threat intelligence packages | ✓ | ✓ | ✓ | ✓ |
-| Onboard sensors | | ✓ | ✓ | ✓ |
| Onboard subscriptions and update committed devices | | ✓ | ✓ | ✓ |
+| Onboard sensors | | ✓ | ✓ | ✓ |
+| View details and access software, activation files and threat intelligence packages | ✓ | ✓ | ✓ | ✓ |
| Recover passwords | ✓ | ✓ | ✓ | ✓ |
-## Identify the solution infrastructure
-
-**Clarify your network setup needs**
-
-Research your:
-- Network architecture
-- Monitored bandwidth
-- Requirements for creating certificates
-- Other network details.
-For more information, see [About Microsoft Defender for IoT network setup](how-to-set-up-your-network.md).
-
-**Clarify which sensor appliances are required to handle the network load**
-
-Microsoft Defender for IoT supports both physical and virtual deployments. For the physical deployments, you can purchase various certified appliances. For more information, see [Identify required appliances](how-to-identify-required-appliances.md).
-
-We recommend that you calculate the approximate number of devices that will be monitored. Later, when you register your Azure subscription to the portal, you'll be asked to enter this number. Numbers can be added in intervals of 1,000, for example 1000, 2000, 3000. The numbers of monitored devices are called *committed devices*.
-
-If you are using a Defender for IoT sensor version lower than 22.1.x, you must also clarify your appliances for the on-premises management console.
-## Register with Microsoft Defender for IoT
-
-Registration includes:
-- Onboarding your Azure subscriptions to Defender for IoT.
-- Defining committed devices.
-- Downloading an activation file for the on-premises management console.
-You can also use a trial subscription to monitor 1000 devices for free for 30 days. See [Onboard a trial subscription](how-to-manage-subscriptions.md#onboard-a-trial-subscription) for more information.
-
-**To register**:
-
-1. Go to the [Defender for IoT: Getting started](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started) in the Azure portal.
-
-1. Select **Onboard subscription**.
-
-1. On the **Pricing** page, select a subscription or create a new one, and add the number of committed devices.
-
-1. Select the **Download the on-premises management console** tab and save the downloaded activation file. This file contains the aggregate committed devices that you defined. The file will be uploaded to the management console after initial sign-in.
-
-For information on how to offboard a subscription, see [Offboard a subscription](how-to-manage-subscriptions.md#offboard-a-subscription).
-
-## Install and set up the on-premises management console
-
-This section is required only when you are using a Defender for IoT sensor version lower than 22.1.x.
-
-After you acquire your on-premises management console appliance:
--- Download the ISO package from the Azure portal.-- Install the software.-- Activate and carry out initial management console setup.-
-**To install and set up**:
-
-1. Go to [Defender for IoT: Getting Started](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started) in the Azure portal.
-
-1. Select the **On-premises management console** tab.
-
-1. Choose a version and select **Download**.
-
-1. Install the on-premises management console software. For more information, see [Defender for IoT installation](how-to-install-software.md).
-
-1. Activate and set up the management console. For more information, see [Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md).
+### Supported service regions
-## Onboard a sensor
+Defender for IoT routes all traffic from all European regions to the *West Europe* regional datacenter. It routes traffic from all remaining regions to the *Central US* regional datacenter.
-Onboard a sensor by registering it with Microsoft Defender for IoT and downloading a sensor activation file:
+If you're using a legacy version of the sensor and are connecting sensors through your own IoT Hub, the IoT Hub supported regions are also relevant for your organization. For more information, see [IoT Hub supported regions](https://azure.microsoft.com/global-infrastructure/services/?products=iot-hub).
-1. Define a sensor name and associate it with a subscription.
+## Identify and plan your OT solution architecture
-1. Choose a sensor connection mode:
+If you're working with an OT network, we recommend that you identify system requirements and plan your system architecture before you start, even if you plan to start with a trial subscription.
- - **Cloud connected sensors**: Information that sensors detect is displayed in the sensor console. In addition, alert information is delivered to Azure and can be shared with other Azure services, such as Microsoft Sentinel. You can also choose to automatically push threat intelligence packages from Defender for IoT to your sensors. For more information, see [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md).
+If you're setting up network monitoring for enterprise IoT systems, you can skip directly to [Add a subscription to Defender for IoT](#add-a-subscription-to-defender-for-iot).
- - **Locally managed sensors**: Information that sensors detect is displayed in the sensor console. If you're working in an air-gapped network and want a unified view of all information detected by multiple locally managed sensors, work with the on-premises management console.
+**When working with an OT network**:
-1. Select a site to associate your sensor to. Define the display name, and zone. You can also add descriptive tags. The display name, zone, and tags are descriptive entries on the [Sites and Sensors page](how-to-manage-sensors-on-the-cloud.md#manage-on-boarded-sensors).
+- To deploy Defender for IoT, you'll need network switches that support traffic monitoring via a SPAN port and hardware appliances for NTA sensors.
-1. Select **Register**.
+ For on-premises machines, including network sensors and on-premises management consoles for air-gapped environments, you'll need administrative user permissions for activities such as activation, managing SSL/TLS certificates, managing passwords, and so on.
-1. Select **Download activation file**.
+- Research your own network architecture and monitored bandwidth. Check requirements for creating certificates and other network details, and clarify the sensor appliances you'll need for your own network load.
-For details about onboarding, see [Onboard and manage sensors with Defender for IoT](how-to-manage-sensors-on-the-cloud.md).
+ Calculate the approximate number of devices you'll be monitoring. Devices can be added in intervals of **1,000**, such as **1,000**, **2,000**, or **3,000**. The numbers of monitored devices are called *committed devices*; you can estimate this number as shown in the sketch below.
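If it helps to script this estimate, here's a minimal Python sketch (an illustration only, not part of Defender for IoT) that rounds an estimated device count up to the next 1,000-device interval:

```python
import math

def committed_devices(estimated_devices: int, interval: int = 1000) -> int:
    """Round an estimated device count up to the next committed-devices interval."""
    return math.ceil(estimated_devices / interval) * interval

# Example: an estimate of 2,340 monitored devices requires a commitment of 3,000.
print(committed_devices(2340))  # 3000
```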
-## Install and set up the sensor
+Microsoft Defender for IoT supports both physical and virtual deployments. For physical deployments, you'll be able to purchase certified appliances with software pre-installed, or download software to install yourself.
-Download the ISO package from the Azure portal, install the software, and set up the sensor.
+For more information, see:
-1. Go to [Defender for IoT: Getting started](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started) in the Azure portal.
+- [Best practices for planning your OT network monitoring](plan-network-monitoring.md)
+- [Sensor connection methods](architecture-connections.md)
+- [Prepare your OT network for Microsoft Defender for IoT](how-to-set-up-your-network.md)
+- [Predeployment checklist](pre-deployment-checklist.md)
+- [Identify required appliances](how-to-identify-required-appliances.md).
-1. Select **Set up sensor**.
+## Add a subscription to Defender for IoT
-1. Choose a version and select **Download**.
+This procedure describes how to add a new Azure subscription to Defender for IoT. If you're planning to monitor both OT and enterprise IoT networks, we recommend adding separate subscriptions.
-1. Install the sensor software. For more information, see [Defender for IoT installation](how-to-install-software.md).
+**To add your subscription**:
-1. Activate and set up your sensor. For more information, see [Sign in and activate a sensor](how-to-activate-and-set-up-your-sensor.md).
+1. In the Azure portal, go to **Defender for IoT** > **Pricing**.
-## Connect sensors to Defender for IoT
+1. Select **Add** to add a new subscription, and then define the following values:
-This section is required only when you are using a Defender for IoT sensor version 22.1.x or higher.
+ - **Purchase method**. Select a monthly or annual commitment, or a trial. Microsoft Defender for IoT provides a 30-day free trial for the first 1,000 committed devices for evaluation purposes.
-Connect your sensors to Defender for IoT to ensure that sensors send alert and device inventory information to Defender for IoT on the Azure portal.
+ For more information, see the **Microsoft Defender for IoT** section of the [Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
-For more information, see [Sensor connection methods](architecture-connections.md) and [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md)
-.
+ - **Subscription**. Select a subscription where you have a **Subscription Contributor** role.
-## Connect sensors to an on-premises management console
+ - **Committed devices**. If you selected a monthly or annual commitment, enter the number of devices you want to monitor. If you selected a trial, this section doesn't appear because a default of 1,000 devices is applied.
-Connect sensors to the management console to ensure that:
+1. Select the **I accept the terms** option, and then select **Save**.
-- Sensors send alert and device inventory information to the on-premises management console.
+Your subscription is shown in the **Pricing** grid. For example:
-- The on-premises management console can perform sensor backups, manage alerts that sensors detect, investigate sensor disconnections, and carry out other activity on connected sensors.
-We recommend that you group multiple sensors monitoring the same networks in one zone. Doing this will coalesce information collected by multiple sensors.
+For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md).
-For more information, see [Connect sensors to the on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md#connect-sensors-to-the-on-premises-management-console).
+## Next steps
+Continue with one of the following tutorials, depending on whether you're setting up a network for OT system security or Enterprise IoT system security:
-## Next steps ##
+- [Tutorial: Get started with OT network security](tutorial-onboarding.md)
+- [Tutorial: Get started with Enterprise IoT network security](tutorial-getting-started-eiot-sensor.md)
-[Welcome to Microsoft Defender for IoT](overview.md)
+For more information, see:
-[Microsoft Defender for IoT architecture](architecture.md)
+- [Welcome to Microsoft Defender for IoT for organizations](overview.md)
+- [Microsoft Defender for IoT architecture](architecture.md)
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md
For more information about working with certificates, see [Manage certificates](
1. Go to the sensor console from your browser by using the IP defined during the installation. The sign-in dialog box opens.
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-1.png" alt-text="Screenshot of a Defender for IoT sensor sign in page.":::
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-1.png" alt-text="Screenshot of a Defender for IoT sensor sign-in page.":::
1. Enter the credentials defined during the sensor installation, or select the **Password recovery** option. If you purchased a preconfigured sensor from Arrow, generate a password first. For more information on password recovery, see [Investigate password failure at initial sign-in](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#investigate-password-failure-at-initial-sign-in).
For more information about working with certificates, see [Manage certificates](
1. Enable the **Import trusted CA certificate (recommended)** toggle.
1. Define a certificate name.
1. Upload the Key, CRT, and PEM files.
-1. Enter a passphrase and upload a PEM file if required.
+1. Enter a passphrase and upload a PEM file if necessary.
1. It's recommended to select **Enable certificate validation** to validate the connections between the management console and connected sensors.
1. Select **Finish**.
For users with versions prior to 10.0, your license may expire, and the followin
After first-time activation, the Microsoft Defender for IoT sensor console opens after sign-in without requiring an activation file or certificate definition. You only need your sign-in credentials.
-After your sign in, the Microsoft Defender for IoT sensor console opens.
+After your sign-in, the Microsoft Defender for IoT sensor console opens.
:::image type="content" source="media/how-to-activate-and-set-up-your-sensor/initial-dashboard.png" alt-text="Screenshot of the initial sensor console dashboard Overview page." lightbox="media/how-to-activate-and-set-up-your-sensor/initial-dashboard.png"::: ## Initial setup and learning (for administrators)
-After your first sign in, the Microsoft Defender for IoT sensor starts to monitor your network automatically. Network devices will appear in the device map and device inventory sections. Microsoft Defender for IoT will begin to detect and alert you on all security and operational incidents that occur in your network. You can then create reports and queries based on the detected information.
+After your first sign-in, the Microsoft Defender for IoT sensor starts to monitor your network automatically. Network devices will appear in the device map and device inventory sections. Microsoft Defender for IoT will begin to detect and alert you on all security and operational incidents that occur in your network. You can then create reports and queries based on the detected information.
Initially this activity is carried out in the Learning mode, which instructs your sensor to learn your network's usual activity. For example, the sensor learns devices discovered in your network, protocols detected in the network, and file transfers that occur between specific devices. This activity becomes your network's baseline activity.
Before you sign in, verify that you have:
- The sensor IP address.
- Sign-in credentials that your administrator provided.
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-1.png" alt-text="Screenshot of the sensor sign in page after the initial setup.":::
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-1.png" alt-text="Screenshot of the sensor sign-in page after the initial setup.":::
## Console tools: Overview
You can access console tools from the side menu. Tools help you:
| Overview | View a dashboard with high-level information about your sensor deployment, alerts, traffic, and more. |
| Device map | View the network devices, device connections, Purdue levels, and device properties in a map. Various zoom, highlight, and filter options are available to help you gain the insight you need. For more information, see [Investigate sensor detections in the Device Map](how-to-work-with-the-sensor-device-map.md#investigate-sensor-detections-in-the-device-map). |
| Device inventory | The Device inventory displays a list of device attributes that this sensor detects. Options are available to: <br /> - Sort, or filter the information according to the table fields, and see the filtered information displayed. <br /> - Export information to a CSV file. <br /> - Import Windows registry details. For more information, see [Investigate sensor detections in a device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md#investigate-sensor-detections-in-an-inventory).|
-| Alerts | Alerts are triggered when sensor engines detect changes or suspicious activity in network traffic that require your attention. For more information, see [View alerts on your sensor](how-to-view-alerts.md#view-alerts-on-your-sensor).|
+| Alerts | Alerts are triggered when sensor engines detect changes or suspicious activity in network traffic that requires your attention. For more information, see [View alerts on your sensor](how-to-view-alerts.md#view-alerts-on-your-sensor).|
### Analyze
You can access console tools from the side menu. Tools help you:
## Review system messages
- System messages provide general information about your sensor that may require your attention, for example if:
- - your sensor activation file is expired or will expire soon
- - your sensor isn't detecting traffic
+System messages provide general information about your sensor that may require your attention, for example if:
+
+- your sensor activation file is expired or will expire soon
+- your sensor isn't detecting traffic
- your sensor SSL certificate is expired or will expire soon
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/system-messages.png" alt-text="Screenshot of the System messages area on the sensor console page, displayed after selecting the bell icon.":::
**To review system messages:** 1. Sign into the sensor
For more information, see:
- [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md)
-- [Onboard a sensor](getting-started.md#onboard-a-sensor)
+- [Onboard a sensor](tutorial-onboarding.md#onboard-and-activate-the-virtual-sensor)
- [Manage sensor activation files](how-to-manage-individual-sensors.md#manage-sensor-activation-files)
defender-for-iot How To Control What Traffic Is Monitored https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-control-what-traffic-is-monitored.md
Sensors automatically perform deep packet detection for IT and OT traffic and resolve information about network devices, such as device attributes and behavior. Several tools are available to control the type of traffic that each sensor detects.
+## Analytics and self-learning engines
+
+Engines identify security issues via continuous monitoring and five analytics engines that incorporate self-learning to eliminate the need for updating signatures or defining rules. The engines use ICS-specific behavioral analytics and data science to continuously analyze OT network traffic for anomalies. The five engines are:
+
+- **Protocol violation detection**: Identifies the use of packet structures and field values that violate ICS protocol specifications.
+
+- **Policy violation detection**: Identifies policy violations such as unauthorized use of function codes, access to specific objects, or changes to device configuration.
+
+- **Industrial malware detection**: Identifies behaviors that indicate the presence of known malware such as Conficker, Black Energy, Havex, WannaCry, and NotPetya.
+
+- **Anomaly detection**: Detects unusual machine-to-machine (M2M) communications and behaviors. The engine models ICS networks as deterministic sequences of states and transitions, using a patented technique called Industrial Finite State Modeling (IFSM). This requires a shorter learning period than generic mathematical approaches or analytics that were originally developed for IT rather than OT, and it detects anomalies faster, with minimal false positives. (A rough conceptual sketch of this baselining idea follows this list.)
+
+- **Operational incident detection**: Identifies operational issues such as intermittent connectivity that can indicate early signs of equipment failure.
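Defender for IoT's IFSM technique itself is patented and not published in this article. Purely as a conceptual illustration of the baselining idea behind anomaly detection, and not as the product's algorithm, the following Python sketch learns a set of machine-to-machine conversations during a learning period and flags any conversation that wasn't part of that baseline. All addresses and protocol labels are hypothetical:

```python
from typing import Iterable, Set, Tuple

Conversation = Tuple[str, str, str]  # (source IP, destination IP, protocol/function)

def learn_baseline(observed: Iterable[Conversation]) -> Set[Conversation]:
    """Learning mode: record every conversation seen during the learning period."""
    return set(observed)

def detect_anomalies(traffic: Iterable[Conversation],
                     baseline: Set[Conversation]) -> list:
    """Monitoring mode: flag conversations that never appeared in the baseline."""
    return [conv for conv in traffic if conv not in baseline]

# Hypothetical traffic samples for illustration only.
learning_period = [("10.0.0.5", "10.0.0.9", "modbus/read"),
                   ("10.0.0.5", "10.0.0.9", "modbus/write")]
live_traffic = [("10.0.0.5", "10.0.0.9", "modbus/read"),
                ("10.0.0.7", "10.0.0.9", "modbus/write")]  # new source device

baseline = learn_baseline(learning_period)
print(detect_anomalies(live_traffic, baseline))
# [('10.0.0.7', '10.0.0.9', 'modbus/write')]
```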
+ ## Learning and Smart IT Learning modes The Learning mode instructs your sensor to learn your network's usual activity. Examples are devices discovered in your network, protocols detected in the network, file transfers between specific devices, and more. This activity becomes your network baseline.
The learning capabilities (Learning and Smart IT Learning) are enabled by defaul
**To enable or disable learning:**
-1. Select **System settings** > **Network monitoring** > **Detection Engines and Network Modelling**.
+1. Select **System settings** > **Network monitoring** > **Detection Engines and Network Modeling**.
1. Enable or disable the **Learning** and **Smart IT Learning** options.
defender-for-iot How To Identify Required Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-identify-required-appliances.md
This section provides an overview of physical sensor models that are available.
- **About bringing your own appliance**: Review the supported models described below. After you've acquired your appliance, go to **Defender for IoT** > **Getting started** > **Sensor**. Under **Purchase an appliance and install software**, select **Download**.
- :::image type="content" source="media/how-to-prepare-your-network/azure-defender-for-iot-sensor-download-software-screen.png" alt-text="Screenshot for sensor software download.":::
+ :::image type="content" source="media/how-to-prepare-your-network/azure-defender-for-iot-sensor-download-software-screen.png" alt-text="Network sensors ISO.":::
> [!NOTE] > <a name="anchortext"></a>For each model, bandwidth capacity can vary, depending on the distribution of protocols.
+For more information about each model, see [Appliance specifications](#appliance-specifications).
#### Corporate sensors
This section provides an overview of physical sensor models that are available.
### Virtual sensors
-This section provides describes virtual sensors that are available.
+This section describes virtual sensors that are available.
| Deployment type | Corporate | Enterprise | SMB |
|--|--|--|--|
This section details additional appliances that were certified by Microsoft but
After you purchase the appliance, go to **Defender for IoT** > **Network Sensors ISO** > **Installation** to download the software. ## Next steps
defender-for-iot How To Manage Cloud Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md
Defender for IoT alerts lets you enhance the security and operation of your netw
- Protocol and operational anomalies
- Suspected malware traffic

Alerts triggered by Defender for IoT are displayed on the Alerts page in the Azure portal. Use the Alerts page to:

- Learn when an alert was detected.
- Investigate the alert by reviewing an extensive range of alert information. This may include source and destination details, PCAP information, vendor, firmware and OS details, and MITRE ATT&CK information.
- Manage the alert by taking remediation steps on the device or network process, or changing the device status or severity.
-- Integrate alert details with other Microsoft services. For example, with Microsoft Sentinel playbooks and workbooks. See [About the Defender for IoT and Microsoft Sentinel Integration](concept-sentinel-integration.md#about-the-defender-for-iot-and-microsoft-sentinel-integration).
+- Integrate alert details with other Microsoft services. For example, with Microsoft Sentinel playbooks and workbooks. See [About the Defender for IoT and Microsoft Sentinel Integration](concept-sentinel-integration.md).
### How is the Alerts page populated?
Users working with alerts in Azure and on-premises should understand how alert m
| Parameter | Description |
|--|--|
-| **Alert Exclusion rules**| Alert *Exclusion rules* defined in the on-premises management console impact the rules detected by managed sensors. As a result, the alerts excluded be these rules won't be displayed in the Alerts page. See [Create alert exclusion rules](how-to-work-with-alerts-on-premises-management-console.md#create-alert-exclusion-rules) for more information.
-| **Managing alerts on-premises** | Alerts **Learned**, **Acknowledged**, or **Muted** in the on-premises management console or in sensors aren't simultaneously updated in Alerts page on the Defender for IoT Cloud Alerts page. This means that this alert will stay open on the Cloud. However another alert will not be triggered from the on-premises components for this activity.
-| **Managing alert in the portal Alerts page** | Changing the status of an alert to **New**, **Active**, or **Closed** on the Alerts page or changing the alert severity on the Alerts page doesn't impact the alert status or severity in the on-premises management console or sensors.
+| **Alert Exclusion rules**| Alert *Exclusion rules* defined in the on-premises management console affect the alerts detected by managed sensors. As a result, alerts excluded by these rules aren't displayed on the Alerts page. See [Create alert exclusion rules](how-to-work-with-alerts-on-premises-management-console.md#create-alert-exclusion-rules) for more information.
+| **Managing alerts on-premises** | Alerts that are **Learned**, **Acknowledged**, or **Muted** in the on-premises management console or in sensors aren't simultaneously updated on the Defender for IoT Alerts page in the Azure portal. This means that the alert stays open in the cloud. However, another alert won't be triggered from the on-premises components for this activity.
+| **Managing alert in the portal Alerts page** | Changing the status of an alert to **New**, **Active**, or **Closed** on the Alerts page or changing the alert severity on the Alerts page doesn't affect the alert status or severity in the on-premises management console or sensors.
## Next steps
defender-for-iot How To Manage Sensors From The On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-from-the-on-premises-management-console.md
If you're upgrading an on-premises management console and managed sensors, first
**To update several sensors**:
-1. Verify that you've already updated the on-premises management console to the version that you're updating the sensors. For more information see [Update the software version](how-to-manage-the-on-premises-management-console.md#update-the-software-version).
+1. Verify that you've already updated the on-premises management console to the same version that you're updating the sensors to. For more information, see [Update the software version](how-to-manage-the-on-premises-management-console.md#update-the-software-version).
1. On the Azure portal, go to **Defender for IoT** > **Updates**. Under **Sensors**, select **Download** and save the file.
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
Title: Manage sensors with Defender for IoT in the Azure portal description: Learn how to onboard, view, and manage sensors with Defender for IoT in the Azure portal. Previously updated : 11/09/2021 Last updated : 03/30/2022
This article describes how to onboard, view, and manage sensors with [Defender for IoT in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started).
+## Purchase sensors or download software for sensors
+
+This procedure describes how to use the Azure portal to contact vendors for pre-configured appliances, or how to download software for you to install on your own appliances.
+
+1. In the Azure portal, go to **Defender for IoT** > **Getting started** > **Sensor**.
+
+1. Do one of the following:
+
+ - To buy a pre-configured appliance, select **Contact** under **Buy preconfigured appliance**. This opens an email to [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com) with a template request for Defender for IoT appliances.
+
+ - To install software on your own appliances, do the following:
+
+ 1. Make sure that you have a supported appliance available. For more information, see [Identify required appliances](how-to-identify-required-appliances.md).
+
+ 1. Under **Select version**, select the software version you want to install. We recommend that you always select the most recent version.
+
+ 1. Select **Download**. Download the sensor software and save it in a location that you can access from your selected appliance.
+
+ 1. Install your software. For more information, see [Defender for IoT installation](how-to-install-software.md).
+ ## Onboard sensors Onboard a sensor by registering it with Microsoft Defender for IoT and downloading a sensor activation file.
In such cases, do the following:
### Reactivate a sensor for upgrades to version 22.x from a legacy version
-If you are updating your sensor version from a legacy version to 22.1.x or higher, you'll need a somewhat different activation procedure than for earlier releases.
+If you're updating your sensor version from a legacy version to 22.1.x or higher, you'll need a different activation procedure than for earlier releases.
Make sure that you've started with the relevant updates steps for this update. For more information, see [Update a standalone sensor version](how-to-manage-individual-sensors.md#update-a-standalone-sensor-version).
defender-for-iot How To Manage The On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-the-on-premises-management-console.md
This article covers on-premises management console options like backup and resto
You onboard the on-premises management console from the Azure portal.
+## Download software for the on-premises management console
+
+This procedure describes how to use the Azure portal to download software for you to install on your own appliances for an on-premises management console.
+
+1. In the Azure portal, go to **Defender for IoT** > **Getting started** > **On-premises management console**.
+
+1. Make sure that you have a supported appliance available. For more information, see [Identify required appliances](how-to-identify-required-appliances.md).
+
+1. Under **Select version**, select the software version you want to install. We recommend that you always select the most recent version.
+
+1. Select **Download**. Download the on-premises management console software and save it in a location that you can access from your selected appliance.
+
+1. Install your software. For more information, see [Defender for IoT installation](how-to-install-software.md).
+ ## Upload an activation file When you first sign in, an activation file for the on-premises management console is downloaded. This file contains the aggregate committed devices that are defined during the onboarding process. The list includes sensors associated with multiple subscriptions.
defender-for-iot How To Set Up Your Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-your-network.md
Title: Set up your network
+ Title: Prepare your OT network for Microsoft Defender for IoT
description: Learn about solution architecture, network preparation, prerequisites, and other information needed to ensure that you successfully set up your network to work with Microsoft Defender for IoT appliances. Last updated 02/22/2022
-# About Microsoft Defender for IoT network setup
+# Prepare your OT network for Microsoft Defender for IoT
-Microsoft Defender for IoT delivers continuous ICS threat monitoring and device discovery. The platform includes the following components:
+This article describes how to set up your OT network to work with Microsoft Defender for IoT components, including the OT network sensors, the Azure portal, and an optional on-premises management console.
-**Defender for IoT sensors:** Sensors collect ICS network traffic by using passive (agentless) monitoring. Passive and nonintrusive, the sensors have zero performance impact on OT and IoT networks and devices. The sensor connects to a SPAN port or network TAP and immediately begins monitoring your network. Detections are displayed in the sensor console. There, you can view, investigate, and analyze them in a network map, a device inventory, and an extensive range of reports. Examples include risk assessment reports, data mining queries, and attack vectors.
+OT network sensors use agentless, patented technology to discover, learn, and continuously monitor network devices for deep visibility into OT/ICS/IoT risks. Sensors carry out data collection, analysis, and alerting on-site, making them ideal for locations with low bandwidth or high latency.
-**Defender for IoT on-premises management console**: The on-premises management console provides a consolidated view of all network devices. It delivers a real-time view of key OT and IoT risk indicators and alerts across all your facilities. Tightly integrated with your SOC workflows and playbooks, it enables easy prioritization of mitigation activities and cross-site correlation of threats.
+This article is intended for personnel experienced in operating and managing OT and IoT networks, such as automation engineers, plant managers, OT network infrastructure service providers, cybersecurity teams, CISOs, and CIOs.
-**Defender for IoT in the Azure portal:** The Defender for IoT application can help you purchase solution appliances, install and update software, and update TI packages.
-
-This article provides information about solution architecture, network preparation, prerequisites, and more to help you successfully set up your network to work with Defender for IoT appliances. Readers working with the information in this article should be experienced in operating and managing OT and IoT networks. Examples include automation engineers, plant managers, OT network infrastructure service providers, cybersecurity teams, CISOs, or CIOs.
+We recommend that you use this article together with our [pre-deployment checklist](pre-deployment-checklist.md).
For assistance or support, contact [Microsoft Support](https://support.microsoft.com/supportforbusiness/productselection?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099).
-## On-site deployment tasks
-
-Site deployment tasks include:
+## Prerequisites
-- [Collect site information](#collect-site-information)
+Before performing the procedures in this article, make sure that you understand your own network architecture and how you'll connect to Defender for IoT. For more information, see:
-- [Prepare a configuration workstation](#prepare-a-configuration-workstation)
+- [Microsoft Defender for IoT system architecture](architecture.md)
+- [Sensor connection methods](architecture-connections.md)
+- [Best practices for planning your OT network monitoring](plan-network-monitoring.md)
-- [Set up Certificates](#set-up-certificates)
+## On-site deployment tasks
-- [Prepare a configuration workstation](#prepare-a-configuration-workstation)
+Perform the steps in this section before deploying Defender for IoT on your network.
-- [Plan rack installation](#plan-rack-installation)
+Make sure to perform each step methodically, requesting the information and reviewing the data you receive. Prepare and configure your site, and then validate your configuration.
### Collect site information
-Record site information such as:
+Record the following site information:
- Sensor management network information.
Record site information such as:
- DNS servers (optional). Prepare your DNS server's IP and host name.
-For a detailed list and description of important site information, see [Predeployment checklist](#predeployment-checklist).
-
-#### Successful monitoring guidelines
-
-To find the optimal place to connect the appliance in each of your production networks, we recommend that you follow this procedure:
-- ### Prepare a configuration workstation
-Prepare a Windows workstation, including the following:
--- Connectivity to the sensor management interface.
+**To prepare a Windows or Mac workstation**:
-- A supported browser
+- Make sure that you can connect to the sensor management interface.
-- Terminal software, such as PuTTY.
+- Make sure that you have terminal software, such as PuTTY, and a supported browser: the latest version of Microsoft Edge, Chrome, Firefox, or Safari (Mac only).
-Make sure the required firewall rules are open on the workstation. See [Network access requirements](#network-access-requirements) for details.
+ For more information, see [recommended browsers for the Azure portal](../../azure-portal/azure-portal-supported-browsers-devices.md#recommended-browsers).
-#### Supported browsers
+- <a name="networking-requirements"></a>Make sure the required firewall rules are open on the workstation. Verify that your organizational security policy allows access as required. For more information, see [Networking requirements](#networking-requirements). A quick connectivity check is sketched below.
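As a quick, optional way to confirm from the configuration workstation that these ports are reachable, you can attempt TCP connections to the sensor's management interface. The following Python sketch assumes a placeholder sensor IP address and checks the SSH and HTTPS ports listed under [Networking requirements](#networking-requirements):

```python
import socket

SENSOR_IP = "192.168.1.100"  # placeholder: replace with your sensor management IP
PORTS = {22: "SSH (CLI access)", 443: "HTTPS (web console)"}

for port, purpose in PORTS.items():
    try:
        # Attempt a TCP connection with a short timeout.
        with socket.create_connection((SENSOR_IP, port), timeout=5):
            print(f"Port {port} ({purpose}): reachable")
    except OSError as err:
        print(f"Port {port} ({purpose}): blocked or unreachable - {err}")
```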
-The following browsers are supported for the sensors and on-premises management console web applications:
--- Microsoft Edge (latest version)--- Safari (latest version, Mac only)--- Chrome (latest version)--- Firefox (latest version)-
-For more information on supported browsers, see [recommended browsers](../../azure-portal/azure-portal-supported-browsers-devices.md#recommended-browsers).
### Set up certificates
-Following sensor and on-premises management console installation, a local self-signed certificate is generated and used to access the sensor web application. When signing in to Defender for IoT for the first time, Administrator users are prompted to provide an SSL/TLS certificate. In addition, an option to validate to this certificate as well other system certificates is automatically is enabled. See [About Certificates](how-to-deploy-certificates.md) for details.
-
-### Network access requirements
-
-Verify that your organizational security policy allows access to the following:
-
-#### User access to the sensor and management console
+After you've installed the Defender for IoT sensor and/or on-premises management console software, a local, self-signed certificate is generated and used to access the sensor web application.
-| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination |
-|--|--|--|--|--|--|--|--|
-| SSH | TCP | In/Out | 22 | CLI | To access the CLI. | Client | Sensor and on-premises management console |
-| HTTPS | TCP | In/Out | 443 | To access the sensor, and on-premises management console web console. | Access to Web console | Client | Sensor and on-premises management console |
-
-#### Sensor access to Azure portal
-
-| Protocol | Transport | In/Out | Port | Purpose | Source | Destination |
-|--|--|--|--|--|--|--|
-| HTTPS | TCP | Out | 443 | Access to Azure | Sensor | `*.azure-devices.net`<br> `*.blob.core.windows.net`<br> `*.servicebus.windows.net` |
+The first time you sign in to Defender for IoT, administrator users are prompted to provide an SSL/TLS certificate. Optional certificate validation is enabled by default.
-#### Sensor access to the on-premises management console
-
-| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination |
-|--|--|--|--|--|--|--|--|
-| NTP | UDP | In/Out | 123 | Time Sync | Connects the NTP to the on-premises management console. | Sensor | On-premises management console |
-| SSL | TCP | In/Out | 443 | Give the sensor access to the on-premises management console. | The connection between the sensor, and the on-premises management console | Sensor | On-premises management console |
-
-#### Additional firewall rules for external services (optional)
-
-Open these ports to allow extra services for Defender for IoT.
+We recommend having your certificates ready before you start your deployment. For more information, see [Defender for IoT installation](how-to-install-software.md) and [About Certificates](how-to-deploy-certificates.md).
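If you want to check which certificate a sensor or on-premises management console is currently serving, for example to confirm whether the default self-signed certificate is still in place or when it expires, you can retrieve it over TLS. This sketch uses Python's standard `ssl` module and the third-party `cryptography` package, with a placeholder host name:

```python
import ssl
from cryptography import x509  # third-party package: pip install cryptography

HOST = "sensor.example.local"  # placeholder: replace with your sensor or console address
PORT = 443

# Retrieve the PEM-encoded certificate presented by the appliance.
pem_cert = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem_cert.encode())

print("Subject:   ", cert.subject.rfc4514_string())
print("Issuer:    ", cert.issuer.rfc4514_string())
print("Not after: ", cert.not_valid_after)
```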
-| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination |
-|--|--|--|--|--|--|--|--|
-| SMTP | TCP | Out | 25 | Email | Used to open the customer's mail server, in order to send emails for alerts, and events. | Sensor and On-premises management console | Email server |
-| DNS | TCP/UDP | In/Out | 53 | DNS | The DNS server port. | On-premises management console and Sensor | DNS server |
-| HTTP | TCP | Out | 80 | The CRL download for certificate validation when uploading certificates. | Access to the CRL server | Sensor and on-premises management console | CRL server |
-| [WMI](how-to-configure-windows-endpoint-monitoring.md) | TCP/UDP | Out | 135, 1025-65535 | Monitoring | Windows Endpoint Monitoring. | Sensor | Relevant network element |
-| [SNMP](how-to-set-up-snmp-mib-monitoring.md) | UDP | Out | 161 | Monitoring | Monitors the sensor's health. | On-premises management console and Sensor | SNMP server |
-| LDAP | TCP | In/Out | 389 | Active Directory | Allows Active Directory management of users that have access, to log in to the system. | On-premises management console and Sensor | LDAP server |
-| Proxy | TCP/UDP | In/Out | 443 | Proxy | To connect the sensor to a proxy server | On-premises management console and Sensor | Proxy server |
-| Syslog | UDP | Out | 514 | LEEF | The logs that are sent from the on-premises management console to Syslog server. | On-premises management console and Sensor | Syslog server |
-| LDAPS | TCP | In/Out | 636 | Active Directory | Allows Active Directory management of users that have access, to log in to the system. | On-premises management console and Sensor | LDAPS server |
-| Tunneling | TCP | In | 9000 </br></br> in addition to port 443 </br></br> Allows access from the sensor, or end user, to the on-premises management console. </br></br> Port 22 from the sensor to the on-premises management console. | Monitoring | Tunneling | Endpoint, Sensor | On-premises management console |
### Plan rack installation
-To plan your rack installation:
+**To plan your rack installation**:
1. Prepare a monitor and a keyboard for your appliance network settings.
To plan your rack installation:
1. Open all the relevant firewall ports.
-## About passive network monitoring
-
-The appliance receives traffic from multiple sources, either by switch mirror ports (SPAN ports) or by network TAPs. The management port is connected to the business, corporate, or sensor management network with connectivity to an on-premises management console or Defender for IoT in the Azure portal.
--
-### Purdue model
-
-The following sections describe Purdue levels.
--
-#### Level 0: Cell and area
-
-Level 0 consists of a wide variety of sensors, actuators, and devices involved in the basic manufacturing process. These devices perform the basic functions of the industrial automation and control system, such as:
--- Driving a motor.--- Measuring variables.-- Setting an output.-- Performing key functions, such as painting, welding, and bending.-
-#### Level 1: Process control
-
-Level 1 consists of embedded controllers that control and manipulate the manufacturing process whose key function is to communicate with the Level 0 devices. In discrete manufacturing, those devices are programmable logic controllers (PLCs) or remote telemetry units (RTUs). In process manufacturing, the basic controller is called a distributed control system (DCS).
-
-#### Level 2: Supervisory
-
-Level 2 represents the systems and functions associated with the runtime supervision and operation of an area of a production facility. These usually include the following:
--- Operator interfaces or HMIs--- Alarms or alerting systems--- Process historian and batch management systems--- Control room workstations-
-These systems communicate with the PLCs and RTUs in Level 1. In some cases, they communicate or share data with the site or enterprise (Level 4 and Level 5) systems and applications. These systems are primarily based on standard computing equipment and operating systems (Unix or Microsoft Windows).
-
-#### Levels 3 and 3.5: Site-level and industrial perimeter network
-
-The site level represents the highest level of industrial automation and control systems. The systems and applications that exist at this level manage site-wide industrial automation and control functions. Levels 0 through 3 are considered critical to site operations. The systems and functions that exist at this level might include the following:
--- Production reporting (for example, cycle times, quality index, predictive maintenance)--- Plant historian--- Detailed production scheduling--- Site-level operations management--- Device and material management--- Patch launch server--- File server--- Industrial domain, Active Directory, terminal server-
-These systems communicate with the production zone and share data with the enterprise (Level 4 and Level 5) systems and applications.
-
-#### Levels 4 and 5: Business and enterprise networks
-
-Level 4 and Level 5 represent the site or enterprise network where the centralized IT systems and functions exist. The IT organization directly manages the services, systems, and applications at these levels.
-
-## Planning for network monitoring
-
-The following examples present different types of topologies for industrial control networks, along with considerations for optimal monitoring and placement of sensors.
-
-### What should be monitored?
-
-Traffic that goes through layers 1 and 2 should be monitored.
-
-### What should the Defender for IoT appliance connect to?
-
-The Defender for IoT appliance should connect to the managed switches that see the industrial communications between layers 1 and 2 (in some cases also layer 3).
-
-The following diagram is a general abstraction of a multilayer, multitenant network, with an expansive cybersecurity ecosystem typically operated by an SOC and MSSP.
-
-Typically, NTA sensors are deployed in layers 0 to 3 of the OSI model.
--
-#### Example: Ring topology
-
-The ring network is a topology in which each switch or node connects to exactly two other switches, forming a single continuous pathway for the traffic.
--
-#### Example: Linear bus and star topology
-
-In a star network, every host is connected to a central hub. In its simplest form, one central hub acts as a conduit to transmit messages. In the following example, lower switches are not monitored, and traffic that remains local to these switches will not be seen. Devices might be identified based on ARP messages, but connection information will be missing.
--
-#### Multisensor deployment
-
-Here are some recommendations for deploying multiple sensors:
-
-| **Number** | **Meters** | **Dependency** | **Number of sensors** |
-|--|--|--|--|
-| The maximum distance between switches | 80 meters | Prepared Ethernet cable | More than 1 |
-| Number of OT networks | More than 1 | No physical connectivity | More than 1 |
-| Number of switches | Can use RSPAN configuration | Up to eight switches with local span close to the sensor by cabling distance | More than 1 |
-
-#### Traffic mirroring
-
-To see only relevant information for traffic analysis, you need to connect the Defender for IoT platform to a mirroring port on a switch or a TAP that includes only industrial ICS and SCADA traffic.
--
-You can monitor switch traffic by using the following methods:
--- [Switch SPAN port](#switch-span-port)--- [Remote SPAN (RSPAN)](#remote-span-rspan)--- [Active and passive aggregation TAP](#active-and-passive-aggregation-tap)-
-SPAN and RSPAN are Cisco terminology. Other brands of switches have similar functionality but might use different terminology.
-
-#### Switch SPAN port
-
-A switch port analyzer mirrors local traffic from interfaces on the switch to interface on the same switch. Here are some considerations:
--- Verify that the relevant switch supports the port mirroring function. --- The mirroring option is disabled by default.--- We recommend that you configure all of the switch's ports, even if no data is connected to them. Otherwise, a rogue device might be connected to an unmonitored port, and it would not be alerted on the sensor.--- On OT networks that utilize broadcast or multicast messaging, configure the switch to mirror only RX (Receive) transmissions. Otherwise, multicast messages will be repeated for as many active ports, and the bandwidth is multiplied.-
-The following configuration examples are for reference only and are based on a Cisco 2960 switch (24 ports) running IOS. They are typical examples only, so don't use them as instructions. Mirror ports on other Cisco operating systems and other brands of switches are configured differently.
--
-##### Monitoring multiple VLANs
-
-Defender for IoT allows monitoring VLANs configured in your network. No configuration of the Defender for IoT system is required. The user needs to ensure that the switch in your network is configured to send VLAN tags to Defender for IoT.
-
-The following example shows the required commands that must be configured on the Cisco switch to enable monitoring VLANs in Defender for IoT:
-
-**Monitor session**: This command is responsible for the process of sending VLANs to the SPAN port.
--- monitor session 1 source interface Gi1/2--- monitor session 1 filter packet type good Rx--- monitor session 1 destination interface fastEthernet1/1 encapsulation dot1q-
-**Monitor Trunk Port F.E. Gi1/1**: VLANs are configured on the trunk port.
--- interface GigabitEthernet1/1--- switchport trunk encapsulation dot1q--- switchport mode trunk-
-#### Remote SPAN (RSPAN)
-
-The remote SPAN session mirrors traffic from multiple distributed source ports into a dedicated remote VLAN.
--
-The data in the VLAN is then delivered through trunked ports across multiple switches to a specific switch that contains the physical destination port. This port connects to the Defender for IoT platform.
-
-##### More about RSPAN
--- RSPAN is an advanced feature that requires a special VLAN to carry the traffic that SPAN monitors between switches. RSPAN is not supported on all switches. Verify that the switch supports the RSPAN function.--- The mirroring option is disabled by default.--- The remote VLAN must be allowed on the trunked port between the source and destination switches.--- All switches that connect the same RSPAN session must be from the same vendor.-
-> [!NOTE]
-> Make sure that the trunk port that's sharing the remote VLAN between the switches is not defined as a mirror session source port.
->
-> The remote VLAN increases the bandwidth on the trunked port by the size of the mirrored session's bandwidth. Verify that the switch's trunk port supports that.
--
-#### RSPAN configuration examples
-
-RSPAN: Based on Cisco catalyst 2960 (24 ports).
-
-**Source switch configuration example:**
-
+### Validate your network
-1. Enter global configuration mode.
+After preparing your network, use the guidance in this section to validate whether you're ready to deploy Defender for IoT.
-1. Create a dedicated VLAN.
+Try to obtain a sample of recorded traffic (a PCAP file) from the switch SPAN or mirror port. This sample will help you:
-1. Identify the VLAN as the RSPAN VLAN.
-
-1. Return to "configure terminal" mode.
-
-1. Configure all 24 ports as session sources.
-
-1. Configure the RSPAN VLAN to be the session destination.
-
-1. Return to privileged EXEC mode.
-
-1. Verify the port mirroring configuration.
-
-**Destination switch configuration example:**
-
-1. Enter global configuration mode.
-
-1. Configure the RSPAN VLAN to be the session source.
-
-1. Configure physical port 24 to be the session destination.
-
-1. Return to privileged EXEC mode.
-
-1. Verify the port mirroring configuration.
-
-1. Save the configuration.
-
-#### Active and passive aggregation TAP
-
-An active or passive aggregation TAP is installed inline to the network cable. It duplicates both RX and TX to the monitoring sensor.
-
-The terminal access point (TAP) is a hardware device that allows network traffic to flow from port A to port B, and from port B to port A, without interruption. It creates an exact copy of both sides of the traffic flow, continuously, without compromising network integrity. Some TAPs aggregate transmit and receive traffic by using switch settings if desired. If aggregation is not supported, each TAP uses two sensor ports to monitor send and receive traffic.
-
-TAPs are advantageous for various reasons. They're hardware-based and can't be compromised. They pass all traffic, even damaged messages, which the switches often drop. They're not processor sensitive, so packet timing is exact where switches handle the mirror function as a low-priority task that can affect the timing of the mirrored packets. For forensic purposes, a TAP is the best device.
-
-TAP aggregators can also be used for port monitoring. These devices are processor-based and are not as intrinsically secure as hardware TAPs. They might not reflect exact packet timing.
--
-##### Common TAP models
-
-These models have been tested for compatibility. Other vendors and models might also be compatible.
-
-| Image | Model |
-|--|--|
-| :::image type="content" source="media/how-to-set-up-your-network/garland-p1gccas-v2.png" alt-text="Screenshot of Garland P1GCCAS."::: | Garland P1GCCAS |
-| :::image type="content" source="media/how-to-set-up-your-network/ixia-tpa2-cu3-v2.png" alt-text="Screenshot of IXIA TPA2-CU3."::: | IXIA TPA2-CU3 |
-| :::image type="content" source="media/how-to-set-up-your-network/us-robotics-usr-4503-v2.png" alt-text="Screenshot of US Robotics USR 4503."::: | US Robotics USR 4503 |
-
-##### Special TAP configuration
-
-| Garland TAP | US Robotics TAP |
-| -- | |
-| Make sure jumpers are set as follows:<br />:::image type="content" source="media/how-to-set-up-your-network/jumper-setup-v2.jpg" alt-text="Screenshot of US Robotics switch.":::| Make sure Aggregation mode is active. |
-
-## Deployment validation
-
-### Engineering self-review
-
-Reviewing your OT and ICS network diagram is the most efficient way to define the best place to connect to, where you can get the most relevant traffic for monitoring.
-
-The site engineers know what their network looks like. Having a review session with the local network and operational teams will usually clarify expectations and define the best place to connect the appliance.
-
-Relevant information:
--- List of known devices (spreadsheet) --- Estimated number of devices --- Vendors and industrial protocols--- Model of switches, to verify that port mirroring option is available--- Information about who manages the switches (for example, IT) and whether they're external resources--- List of OT networks at the site-
-#### Common questions
--- What are the overall goals of the implementation? Are a complete inventory and accurate network map important?--- Are there multiple or redundant networks in the ICS? Are all the networks being monitored?--- Are there communications between the ICS and the enterprise (business) network? Are these communications being monitored?--- Are VLANs configured in the network design?--- How is maintenance of the ICS performed, with fixed or transient devices?--- Where are firewalls installed in the monitored networks?--- Is there any routing in the monitored networks?--- What OT protocols are active on the monitored networks?--- If we connect to this switch, will we see communication between the HMI and the PLCs?--- What is the physical distance between the ICS switches and the enterprise firewall?
+- Validate if the switch is configured properly.
-- Can unmanaged switches be replaced with managed switches, or is the use of network TAPs an option?
+- Confirm if the traffic that goes through the switch is relevant for monitoring (OT traffic).
-- Is there any serial communication in the network? If yes, show it on the network diagram.
+- Identify bandwidth and the estimated number of devices in this switch.
-- If the Defender for IoT appliance should be connected to that switch, is there physical available rack space in that cabinet?
+For example, you can record a sample PCAP file for a few minutes by connecting a laptop to an already configured SPAN port through the Wireshark application.
-#### Other considerations
+**To use Wireshark to validate your network**:
-The purpose of the Defender for IoT appliance is to monitor traffic from layers 1 and 2.
+- Check that *Unicast packets* are present in the recording traffic. Unicast is from one address to another. If most of the traffic is ARP messages, then the switch setup is incorrect.
-For some architectures, the Defender for IoT appliance will also monitor layer 3, if OT traffic exists on this layer. While you're reviewing the site architecture and deciding whether to monitor a switch, consider the following variables:
+- Go to **Statistics** > **Protocol Hierarchy**. Verify that industrial OT protocols are present. A scripted alternative to these checks is sketched after this list.
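If you'd rather script these checks than inspect the capture manually, a rough equivalent can be put together with the third-party `scapy` package. In the sketch below, the PCAP file name is a placeholder, and the OT port list is only an illustrative heuristic rather than a catalog of what Defender for IoT detects:

```python
from collections import Counter
from scapy.all import rdpcap, ARP, Ether, TCP  # third-party: pip install scapy

# Well-known TCP ports for a few common OT/ICS protocols (illustrative only).
OT_PORTS = {102: "S7comm", 502: "Modbus/TCP", 20000: "DNP3", 44818: "EtherNet/IP"}

packets = rdpcap("span-sample.pcap")  # placeholder file name
arp_count, unicast_count, ot_hits = 0, 0, Counter()

for pkt in packets:
    if pkt.haslayer(ARP):
        arp_count += 1
    elif pkt.haslayer(Ether):
        # Unicast destination MACs have the least-significant bit of the first octet cleared.
        first_octet = int(pkt[Ether].dst.split(":")[0], 16)
        if not first_octet & 1:
            unicast_count += 1
    if pkt.haslayer(TCP) and pkt[TCP].dport in OT_PORTS:
        ot_hits[OT_PORTS[pkt[TCP].dport]] += 1

print(f"ARP frames: {arp_count}, unicast frames: {unicast_count}")
print("OT protocol traffic by port:", dict(ot_hits) or "none detected")
# Mostly ARP and little unicast traffic usually indicates an incorrect SPAN configuration.
```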
-- What is the cost/benefit versus the importance of monitoring this switch?
+For example:
-- If a switch is unmanaged, will it be possible to monitor the traffic from a higher-level switch?
- If the ICS architecture is a ring topology, only one switch in this ring needs to be monitored.
+## Networking requirements
-- What is the security or operational risk in this network?
+Use the following tables to ensure that the required firewall rules are open on your workstation, and verify that your organizational security policy allows the required access.
+### User access to the sensor and management console
-- Is it possible to monitor the switch's VLAN? Is that VLAN visible in another switch that we can monitor?
+| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination |
+|--|--|--|--|--|--|--|--|
+| SSH | TCP | In/Out | 22 | CLI | To access the CLI. | Client | Sensor and on-premises management console |
| HTTPS | TCP | In/Out | 443 | Access to Web console | To access the sensor, and on-premises management console web console. | Client | Sensor and on-premises management console |
-#### Technical validation
+### Sensor access to Azure portal
-Receiving a sample of recorded traffic (PCAP file) from the switch SPAN (or mirror) port can help to:
+| Protocol | Transport | In/Out | Port | Purpose | Source | Destination |
+|--|--|--|--|--|--|--|
+| HTTPS | TCP | Out | 443 | Access to Azure | Sensor | `*.azure-devices.net`<br> `*.blob.core.windows.net`<br> `*.servicebus.windows.net` |
-- Validate if the switch is configured properly.
+### Sensor access to the on-premises management console
-- Confirm if the traffic that goes through the switch is relevant for monitoring (OT traffic).
+| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination |
+|--|--|--|--|--|--|--|--|
+| NTP | UDP | In/Out | 123 | Time Sync | Connects the NTP to the on-premises management console. | Sensor | On-premises management console |
+| SSL | TCP | In/Out | 443 | Give the sensor access to the on-premises management console. | The connection between the sensor, and the on-premises management console | Sensor | On-premises management console |
-- Identify bandwidth and the estimated number of devices in this switch.
+### Other firewall rules for external services (optional)
-You can record a sample PCAP file (a few minutes) by connecting a laptop to an already configured SPAN port through the Wireshark application.
+Open these ports to allow extra services for Defender for IoT.
+| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination |
+|--|--|--|--|--|--|--|--|
+| SMTP | TCP | Out | 25 | Email | Used to connect to the customer's mail server to send emails for alerts and events. | Sensor and On-premises management console | Email server |
+| DNS | TCP/UDP | In/Out | 53 | DNS | The DNS server port. | On-premises management console and Sensor | DNS server |
+| HTTP | TCP | Out | 80 | CRL download | Access to the CRL server for certificate validation when uploading certificates. | Sensor and on-premises management console | CRL server |
+| [WMI](how-to-configure-windows-endpoint-monitoring.md) | TCP/UDP | Out | 135, 1025-65535 | Monitoring | Windows Endpoint Monitoring. | Sensor | Relevant network element |
+| [SNMP](how-to-set-up-snmp-mib-monitoring.md) | UDP | Out | 161 | Monitoring | Monitors the sensor's health. | On-premises management console and Sensor | SNMP server |
+| LDAP | TCP | In/Out | 389 | Active Directory | Allows Active Directory management of the users that can sign in to the system. | On-premises management console and Sensor | LDAP server |
+| Proxy | TCP/UDP | In/Out | 443 | Proxy | To connect the sensor to a proxy server | On-premises management console and Sensor | Proxy server |
+| Syslog | UDP | Out | 514 | LEEF | The logs that are sent from the on-premises management console to Syslog server. | On-premises management console and Sensor | Syslog server |
+| LDAPS | TCP | In/Out | 636 | Active Directory | Allows Active Directory management of the users that can sign in to the system. | On-premises management console and Sensor | LDAPS server |
+| Tunneling | TCP | In | 9000, in addition to port 443 (from the sensor or end user to the on-premises management console) and port 22 (from the sensor to the on-premises management console) | Monitoring | Tunneling | Endpoint, Sensor | On-premises management console |
-#### Wireshark validation
+## Choose a cloud connection method
-- Check that *Unicast packets* are present in the recording traffic. Unicast is from one address to another. If most of the traffic is ARP messages, then the switch setup is incorrect.
+If you're setting up OT sensors and connecting them to the cloud, understand supported cloud connection methods, and make sure to connect your sensors as needed.
-- Go to **Statistics** > **Protocol Hierarchy**. Verify that industrial OT protocols are present.
+For more information, see:
+- [OT sensor cloud connection methods](architecture-connections.md)
+- [Connect your OT sensors to the cloud](connect-sensors.md)
## Troubleshooting
-Use these sections for troubleshooting issues:
--- [Can't connect by using a web interface](#cant-connect-by-using-a-web-interface)--- [Appliance is not responding](#appliance-is-not-responding)
+This section provides troubleshooting for common issues when preparing your network for a Defender for IoT deployment.
### Can't connect by using a web interface
Use these sections for troubleshooting issues:
2. Verify that the GUI network is connected to the management port on the sensor.
-3. Ping the appliance IP address. If there is no response to ping:
+3. Ping the appliance IP address. If there's no response to ping:
1. Connect a monitor and a keyboard to the appliance. 1. Use the **support** user and password to sign in.
- 1. Use the command **network list** to see the current IP address.
+ 1. Use the command **network list** to see the current IP address. For example:
- :::image type="content" source="media/how-to-set-up-your-network/list-of-network-commands.png" alt-text="Screenshot of the network list command.":::
+ :::image type="content" source="media/how-to-set-up-your-network/list-of-network-commands.png" alt-text="Screenshot of the network list command.":::
4. If the network parameters are misconfigured, use the following procedure to change it:
Use these sections for troubleshooting issues:
6. Try to ping and connect from the GUI again.
-### Appliance is not responding
+### Appliance isn't responding
1. Connect with a monitor and keyboard to the appliance, or use PuTTY to connect remotely to the CLI.
Use these sections for troubleshooting issues:
:::image type="content" source="media/how-to-set-up-your-network/system-sanity-command.png" alt-text="Screenshot of the system sanity command.":::
-For any other issues, contact [Microsoft Support](https://support.microsoft.com/en-us/supportforbusiness/productselection?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099).
-
-## Predeployment checklist
-
-Use the predeployment checklist to retrieve and review important information that you need for network setup.
-
-### Site checklist
-
-Review this list before site deployment:
-
-| **#** | **Task or activity** | **Status** | **Comments** |
-|--|--|--|--|
-| 1 | Order appliances. | ☐ | |
-| 2 | Prepare a list of subnets in the network. | ☐ | |
-| 3 | Provide a VLAN list of the production networks. | ☐ | |
-| 4 | Provide a list of switch models in the network. | ☐ | |
-| 5 | Provide a list of vendors and protocols of the industrial equipment. | ☐ | |
-| 6 | Provide network details for sensors (IP address, subnet, D-GW, DNS). | ☐ | |
-| 7 | Third-party switch management | ☐ | |
-| 8 | Create necessary firewall rules and the access list. | ☐ | |
-| 9 | Create spanning ports on switches for port monitoring, or configure network taps as desired. | ☐ | |
-| 10 | Prepare rack space for sensor appliances. | ☐ | |
-| 11 | Prepare a workstation for personnel. | ☐ | |
-| 12 | Provide a keyboard, monitor, and mouse for the Defender for IoT rack devices. | ☐ | |
-| 13 | Rack and cable the appliances. | ☐ | |
-| 14 | Allocate site resources to support deployment. | ☐ | |
-| 15 | Create Active Directory groups or local users. | ☐ | |
-| 16 | Set up training (self-learning). | ☐ | |
-| 17 | Go or no-go. | ☐ | |
-| 18 | Schedule the deployment date. | ☐ | |
--
-| **Date** | **Note** | **Deployment date** | **Note** |
-|--|--|--|--|
-| Defender for IoT | | Site name* | |
-| Name | | Name | |
-| Position | | Position | |
-
-#### Architecture review
-
-An overview of the industrial network diagram will allow you to define the proper location for the Defender for IoT equipment.
-
-1. **Global network diagram** - View a global network diagram of the industrial OT environment. For example:
-
- :::image type="content" source="media/how-to-set-up-your-network/backbone-switch.png" alt-text="Diagram of the industrial OT environment for the global network.":::
-
- > [!NOTE]
- > The Defender for IoT appliance should be connected to a lower-level switch that sees the traffic between the ports on the switch.
-
-1. **Committed devices** - Provide the approximate number of network devices that will be monitored. You will need this information when onboarding your subscription to Defender for IoT in the Azure portal. During the onboarding process, you will be prompted to enter the number of devices in increments of 1000.
-
-1. **(Optional) Subnet list** - Provide a subnet list for the production networks and a description (optional).
-
- | **#** | **Subnet name** | **Description** |
- |--| | |
- | 1 | |
- | 2 | |
- | 3 | |
- | 4 | |
-
-1. **VLANs** - Provide a VLAN list of the production networks.
-
- | **#** | **VLAN Name** | **Description** |
- |--|--|--|
- | 1 | | |
- | 2 | | |
- | 3 | | |
- | 4 | | |
-
-1. **Switch models and mirroring support** - To verify that the switches have port mirroring capability, provide the switch model numbers that the Defender for IoT platform should connect to:
-
- | **#** | **Switch** | **Model** | **Traffic mirroring support (SPAN, RSPAN, or none)** |
- |--|--|--|--|
- | 1 | | |
- | 2 | | |
- | 3 | | |
- | 4 | | |
-
-1. **Third-party switch management** - Does a third party manage the switches? Y or N
-
- If yes, who? __________________________________
-
- What is their policy? __________________________________
-
- For example:
-
- - Siemens
-
- - Rockwell automation – Ethernet or IP
-
- - Emerson – DeltaV, Ovation
-
-1. **Serial connection** - Are there devices that communicate via a serial connection in the network? Yes or No
-
- If yes, specify which serial communication protocol: ________________
-
- If yes, mark on the network diagram what devices communicate with serial protocols, and where they are:
-
- *Add your network diagram with marked serial connection*
-
-1. **Quality of Service** - For Quality of Service (QoS), the default setting of the sensor is 1.5 Mbps. Specify if you want to change it: ________________
-
- Business unit (BU): ________________
-
-1. **Sensor** - Specifications for site equipment
-
- The sensor appliance is connected to switch SPAN port through a network adapter. It's connected to the customer's corporate network for management through another dedicated network adapter.
-
- Provide address details for the sensor NIC that will be connected in the corporate network:
-
- | Item | Appliance 1 | Appliance 2 | Appliance 3 |
- |--|--|--|--|
- | Appliance IP address | | | |
- | Subnet | | | |
- | Default gateway | | | |
- | DNS | | | |
- | Host name | | | |
-
-1. **iDRAC/iLO/Server management**
-
- | Item | Appliance 1 | Appliance 2 | Appliance 3 |
- |--|--|--|--|
- | Appliance IP address | | | |
- | Subnet | | | |
- | Default gateway | | | |
- | DNS | | | |
-
-1. **On-premises management console**
-
- | Item | Active | Passive (when using HA) |
- |--|--|--|
- | IP address | | |
- | Subnet | | |
- | Default gateway | | |
- | DNS | | |
-
-1. **SNMP**
-
- | Item | Details |
- |--|--|
- | IP | |
- | IP address | |
- | Username | |
- | Password | |
- | Authentication type | MD5 or SHA |
- | Encryption | DES or AES |
- | Secret key | |
- | SNMP v2 community string |
-
-1. **On-premises management console SSL certificate**
-
- Are you planning to use an SSL certificate? Yes or No
-
- If yes, what service will you use to generate it? What attributes will you include in the certificate (for example, domain or IP address)?
-
-1. **SMTP authentication**
-
- Are you planning to use SMTP to forward alerts to an email server? Yes or No
-
- If yes, what authentication method you will use?
-
-1. **Active Directory or local users**
-
- Contact an Active Directory administrator to create an Active Directory site user group or create local users. Be sure to have your users ready for the deployment day.
-
-1. IoT device types in the network
-
- | Device type | Number of devices in the network | Average bandwidth |
- | | | -- |
- | Camera | |
- | X-ray machine | |
- | | |
- | | |
- | | |
- | | |
- | | |
- | | |
- | | |
- | | |
+For any other issues, contact [Microsoft Support](https://support.microsoft.com/supportforbusiness/productselection?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099).
## Next steps
-[About the Defender for IoT installation](how-to-install-software.md)
+For more information, see:
+
+- [Predeployment checklist](pre-deployment-checklist.md)
+- [Quickstart: Get started with Defender for IoT](getting-started.md)
+- [Tutorial: Get started with Microsoft Defender for IoT for OT security](tutorial-onboarding.md)
+- [Defender for IoT installation](how-to-install-software.md)
+- [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md)
+- [Microsoft Defender for IoT system architecture](architecture.md)
+- [Sensor connection methods](architecture-connections.md)
defender-for-iot How To Work With Threat Intelligence Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-threat-intelligence-packages.md
# Threat intelligence research and packages # ## Overview ##
-Security teams in Microsoft carry out proprietary ICS threat intelligence and vulnerability research. These teams include MSTIC (Microsoft Threat Intelligence Center), DART (Microsoft Detection and Response Team), DCU (Digital Crimes Unit), and Section 52 (IoT/OT/ICS domain experts that track ICS-specific zero-days, reverse-engineering malware, campaigns, and adversaries)
+Security teams at Microsoft carry out proprietary ICS threat intelligence and vulnerability research. These teams include MSTIC (Microsoft Threat Intelligence Center), DART (Microsoft Detection and Response Team), DCU (Digital Crimes Unit), and Section 52 (IoT/OT/ICS domain experts who track ICS-specific zero-days, reverse-engineer malware, and follow campaigns and adversaries).
The teams provide security detection, analytics, and response to Microsoft's:
You can also see the most current package delivered from the **Threat intelligen
Three options are available for updating threat intelligence packages to your sensors: -- Automatically push packages to sensors as they are delivered by Defender for IoT.
+- Automatically push packages to sensors as they're delivered by Defender for IoT.
- Manually push threat intelligence package to sensors as required. - Download a package and then upload it to a sensor or multiple sensors.
Users with Defender for IoT Security Reader permissions can automatically and ma
### Automatically push threat intelligence updates to sensors ###
-Threat intelligence packages can be automatically updated to *cloud connected* sensors as they are released by Defender for IoT. Ensure automatic package update by onboarding your cloud connected sensor with the **Automatic Threat Intelligence Updates** option enabled. For more information, see [Onboard a sensor](getting-started.md#onboard-a-sensor).
+Threat intelligence packages can be automatically updated to *cloud connected* sensors as they're released by Defender for IoT. Ensure automatic package update by onboarding your cloud connected sensor with the **Automatic Threat Intelligence Updates** option enabled. For more information, see [Onboard a sensor](tutorial-onboarding.md#onboard-and-activate-the-virtual-sensor).
### Manually push threat intelligence updates to sensors ###
-Your *cloud connected* sensors can be automatically updated with threat intelligence packages. However, if you would like to take a more conservative approach, you can push packages from Defender for IoT to sensors only when you feel it is required. This gives you the ability to control when a package is installed, without the need to download and then upload it to your sensors.
+Your *cloud connected* sensors can be automatically updated with threat intelligence packages. However, if you would like to take a more conservative approach, you can push packages from Defender for IoT to sensors only when you feel it's required. This gives you the ability to control when a package is installed, without the need to download and then upload it to your sensors.
**To manually push packages:**
To review threat intelligence information:
1. Go to the Microsoft Defender for IoT **Sites and Sensors** page. 1. Review the **Threat Intelligence version** installed on each sensor. Version naming is based on the day the package was built by Defender for IoT.
-1. Review the **Threat Intelligence mode** . *Automatic* indicates that newly available packages will be automatically installed on sensors as they are released by Defender for IoT. *Manual* indicates that you can push newly available packages directly to sensors as needed.
+1. Review the **Threat Intelligence mode**. *Automatic* indicates that newly available packages will be automatically installed on sensors as they're released by Defender for IoT. *Manual* indicates that you can push newly available packages directly to sensors as needed.
1. Review the **Threat Intelligence update status**. The following statuses may be displayed: - Failed
If cloud connected threat intelligence updates fail, review connection informa
For more information, see: -- [Onboard a sensor](getting-started.md#onboard-a-sensor)
+- [Onboard a sensor](tutorial-onboarding.md#onboard-and-activate-the-virtual-sensor)
- [Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md)
defender-for-iot Overview Eiot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/overview-eiot.md
- Title: Enterprise IoT network protection
-description: This article discusses Microsoft Defender for IoT features and services and how it helps provide comprehensive IoT security for enterprise IoT networks.
- Previously updated : 12/27/2021--
-# Enterprise IoT network protection
-
-The Microsoft Defender for IoT team is responsible for securing IoT and operational technology (OT) devices end-to-end, whether they're connected to IT, OT, or dedicated IoT networks.
-
-Enterprise IoT network protection extends agentless capabilities beyond operational environments and into enterprise environments. This protection provides coverage for the entire breadth of IoT devices in these environments, devices that include corporate printers, cameras, and purpose-built or proprietary devices.
-
-The expansion of IoT into the enterprise network creates a unique opportunity to apply the asset-discovery capabilities of Microsoft 365 Defender.
--
-## Integration with Microsoft Defender for Endpoint
-
-You can now integrate Defender for IoT with Defender for Endpoint. When you do so, you're combining Defender for Endpoint device-discovery and agentless monitoring capabilities to help secure enterprise IoT devices that are connected to an IT network. Two examples of such devices are Voice over Internet Protocol (VoIP) phones and smart TVs. The result of this integration is a single, integrated solution that helps secure your entire IoT and OT infrastructure.
-
-With this integration, you can use Defender for IoT sensors as additional data sources. Defender for IoT sensors provide visibility into areas of your network where Defender for Endpoint is not deployed and employees need to access data remotely. These sensors also provide visibility into IoT-to-IoT and IoT-to-internet communications.
-
-After you've enabled this integration, any devices that are discovered on the network by either Defender for Iot or Defender for Endpoint will be synced automatically across both portals.
-
-To learn how to integrate Defender for Endpoint with your Defender for IoT solution, see the Microsoft Defender for Endpoint documentation.
-
-## Automatic discovery for IT, IoT, and OT
-
-Use passive, agentless network monitoring to gain insight into your entire inventory of IT, IoT, and OT devices. The discovery process continuously identifies and classifies devices in your network, and it resolves all device details, with zero effect on the network.
-
-## Single pane of glass
-
-A centralized user experience lets your security team visualize and secure all IT, IoT, and OT devices, no matter where they're located.
-
-## The power of unified SIEM and XDR
-
-Defender for IoT shares its high-resolution signal data with Microsoft Defender 365 and Microsoft Sentinel. This sharing accelerates incident response and provides a bird's eye view across IT, OT, and IoT boundaries.
-
-Defender for IoT shares rich telemetry data seamlessly with security information and event management (SIEM) and extended detection and response (XDR) platforms, such as Microsoft Sentinel and Microsoft 365 Defender. It also interoperates with other Security Operations Center (SOC) tools, such as Splunk, IBM QRadar, and ServiceNow. Azure Sentinel customers no longer need to switch consoles to put the story together.
-
-## Easy deployment for a scalable solution
-
-Defender for IoT helps ensure a quick, frictionless deployment of network sensor appliances, both physical and virtual.
-
-If you experience any issues, we encourage you to contact our customer support team.
-
-## Next steps
-
-For more information, see [Tutorial: Get started with enterprise IoT](tutorial-getting-started-eiot-sensor.md).
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/overview.md
Title: Overview for OT networks
-description: Learn more about Defender for IoT features and services, and understand how Defender for IoT provides comprehensive IoT security for OT networks.
+ Title: Overview - Microsoft Defender for IoT for organizations
+description: Learn about Microsoft Defender for IoT's features for end-user organizations and comprehensive IoT security for OT and Enterprise IoT networks.
Previously updated : 11/09/2021 Last updated : 03/23/2022
[!INCLUDE [Banner for top of topics](../includes/banner.md)]
-Operational technology (OT) networks power many of the most critical aspects of our society. But many of these technologies were not designed with security in mind and can't be protected with traditional IT security controls. Meanwhile, the Internet of Things (IoT) is enabling a new wave of innovation with billions of connected devices, increasing the attack surface and risk.
+The Internet of Things (IoT) supports billions of connected devices that use operational technology (OT) networks. IoT/OT devices and networks are often designed without security as a priority, and therefore can't be protected by traditional systems. With each new wave of innovation, the attack surface and the risk to IoT devices and OT networks grow.
-Microsoft Defender for IoT is a unified security solution for identifying IoT/OT devices, vulnerabilities, and threats. It enables you to secure your entire IoT/OT environment, whether you need to protect existing IoT/OT devices or build security into new IoT innovations.
+Microsoft Defender for IoT is a unified security solution for identifying IoT and OT devices, vulnerabilities, and threats and managing them through a central interface. This set of documentation describes how end-user organizations can secure their entire IoT/OT environment, including protecting existing devices or building security into new IoT innovations.
-Microsoft Defender for IoT offers two sets of capabilities to fit your environment's needs.
-For end-user organizations with IoT/OT environments, Microsoft Defender for IoT delivers agentless, network-layer monitoring that:
+**For end-user organizations**, Microsoft Defender for IoT provides agentless, network-layer monitoring that integrates smoothly with industrial equipment and SOC tools. You can deploy Microsoft Defender for IoT in Azure-connected and hybrid environments or completely on-premises.
-- Can be rapidly deployed.-- Integrates easily with diverse industrial equipment and SOC tools.-- Has zero impact on IoT/OT network performance or stability.
+**For IoT device builders**, Microsoft Defender for IoT also offers a lightweight, micro-agent that supports standard IoT operating systems, such as Linux and RTOS. The Microsoft Defender device builder agent helps you ensure that security is built into your IoT/OT projects, from the cloud. For more information, see [Microsoft Defender for IoT for device builders documentation](/device-builders/index.md).
-The platform can be deployed fully on-premises or in Azure-connected and hybrid environments.
+## Agentless device monitoring
-For IoT device builders, Microsoft Defender for IoT also offers lightweight a micro agent that supports standard IoT operating systems, such as Linux and RTOS. This lightweight agent helps ensure that security is built into your IoT/OT initiatives from the edge to the cloud. It includes source code for flexible, customizable deployment.
+Many legacy IoT and OT devices don't support agents, and can therefore remain unpatched, misconfigured, and invisible to IT teams. These devices become soft targets for threat actors who want to pivot deeper into corporate networks.
-## Agentless solution
+Agentless monitoring in Defender for IoT provides visibility and security for networks that traditional network security monitoring tools can't cover, because those tools often lack an understanding of the specialized protocols, devices, and machine-to-machine (M2M) behaviors involved.
-Older IoT, and OT devices don't support agents, and are often unpatched, misconfigured, and invisible to IT teams. Those qualities make them soft targets for threat actors who want to pivot deeper into corporate networks.
+- **Discover IoT/OT devices** in your network, their details, and how they communicate. Gather data from network sensors, Microsoft Defender for Endpoint, and third-party sources.
-Traditional network security monitoring tools developed for corporate IT networks can't address these environments because they lack a deep understanding of the specialized protocols, devices, and machine-to-machine (M2M) behaviors found in IoT and OT environments.
+- **Assess risks and manage vulnerabilities** using machine learning, threat intelligence, and behavioral analytics. For example:
-The agentless monitoring capabilities in Microsoft Defender for IoT give you visibility and security for these networks. You can then address key concerns for these environments.
+ - Identify unpatched devices, open ports, unauthorized applications, unauthorized connections, changes to device configurations, PLC code, and firmware, and more.
-### Automatic device discovery
+ - Run searches in historical traffic across all relevant dimensions and protocols. Access full-fidelity PCAPs to drill down further.
-Use passive, agentless network monitoring to gain a complete inventory of all your IoT/OT devices, their details, and how they communicate, with zero impact on the IoT/OT network.
+ - Detect advanced threats that static indicators of compromise (IOCs) may have missed, such as zero-day malware, fileless malware, and living-off-the-land tactics.
-### Proactive visibility into risk and vulnerabilities
+- **Respond to threats** by integrating with Microsoft services, such as Microsoft Sentinel, and third-party systems and APIs. Use advanced integrations for security information and event management (SIEM), security orchestration, automation, and response (SOAR), extended detection and response (XDR) services, and more.
-Identify risks and vulnerabilities in your IoT/OT environment. For example, identify unpatched devices, open ports, unauthorized applications, and unauthorized connections. You can also identify changes to device configurations, PLC code, and firmware.
+A centralized user experience lets the security team visualize and secure all their IT, IoT, and OT devices regardless of where the devices are located.
-### IoT/OT threat detection
+## Support for cloud, on-premises, and hybrid networks
-Detect anomalous or unauthorized activities with specialized IoT/OT-aware threat intelligence and behavioral analytics. You can even detect advanced threats missed by static IOCs, like zero-day malware, fileless malware, and living-off-the-land tactics.
+Defender for IoT can support various network configurations:
-### Unified security management across IoT/OT
+- **Cloud**. Extend your journey to the cloud by having your data delivered to Azure, where you can visualize data from a central location and also share data with other Microsoft services for end-to-end security monitoring and response.
-Integrate into Microsoft Sentinel for a bird's-eye view of your entire organization. Implement unified IoT/OT security governance with integration into your existing workflows, including third-party tools like Splunk, IBM QRadar, and ServiceNow.
+- **On-premises**. For example, in air-gapped environments, you might want to keep all of your data fully on-premises. Use the data provided by each sensor and the central visualizations provided by an on-premises management console to ensure security on your network.
+
+- **Hybrid**. If you have hybrid network requirements where you can deliver some data to the cloud and other data must remain on-premises only, set up your system in a flexible and scalable configuration that fits your needs.
+
+Regardless of configuration, data detected by a specific sensor is also always available in the sensor console.
+
+## Extend support to proprietary protocols
+
+IoT and ICS devices can be secured using both embedded protocols and proprietary, custom, or non-standard protocols. Use the [Horizon Open Development Environment (ODE) SDK](references-horizon-sdk.md) to develop dissector plug-ins that decode network traffic, regardless of protocol type.
+
+For example, in an environment running MODBUS, you might want to generate an alert when the sensor detects a write command to a memory register on a specific IP address and Ethernet destination. Or you might want to generate an alert when any access is performed to a specific IP address. Alerts are triggered when Horizon alert rule conditions are met.
+
+Use custom, condition-based alert triggering and messaging to help pinpoint specific network activity and effectively update your security, IT, and operational teams.
+
+For more information, see [Horizon proprietary protocol dissector](references-horizon-sdk.md) and [Supported Protocols](concept-supported-protocols.md).
++
+## Extend Defender for IoT to enterprise networks
+
+Microsoft Defender for IoT can protect IoT and OT devices, whether they're connected to IT, OT, or dedicated IoT networks.
+
+Enterprise IoT network protection extends agentless features beyond operational environments, providing coverage for all IoT devices in your environment. For example, an enterprise IoT environment may include printers, cameras, and purpose-built, proprietary, unique devices.
+
+When you expand Microsoft Defender for IoT into the enterprise network, you can apply Microsoft 365 Defender's features for asset discovery and use Microsoft Defender for Endpoint for a single, integrated package that can secure all of your IoT/OT infrastructure.
+
+Use Microsoft Defender for IoT's sensors as extra data sources, providing visibility in areas of your organization's network where Microsoft Defender for Endpoint isn't deployed, and when employees are accessing information remotely. Microsoft Defender for IoT's sensors provide visibility into both IoT-to-IoT and IoT-to-internet communications. Integrating Defender for IoT and Defender for Endpoint synchronizes any devices discovered on the network by either service.
+
+For more information, see the [Microsoft 365 Defender](/microsoft-365/security/defender/microsoft-365-defender) and [Microsoft Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint).
## Next steps
-For more information, see [Microsoft Defender for IoT architecture](architecture.md).
+For more information, see:
+
+- [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md)
+- [Microsoft Defender for IoT architecture](architecture.md)
+- [Quickstart: Get started with Defender for IoT](getting-started.md)
defender-for-iot Plan Network Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/plan-network-monitoring.md
+
+ Title: OT network monitoring best practices for Microsoft Defender for IoT
+description: Learn about best practices for planning your OT network monitoring with Microsoft Defender for IoT.
+ Last updated : 03/27/2022++
+# Best practices for planning your OT network monitoring
+
+This article reviews best practices that we recommend following when planning your OT network monitoring with Microsoft Defender for IoT.
+
+Review these best practices when planning your network. For more information, see [Quickstart: Get started with Defender for IoT](getting-started.md) and [About Microsoft Defender for IoT network setup](how-to-set-up-your-network.md).
+
+## Understand your network architecture
+
+When planning your network monitoring, you must understand your system network architecture and how it will need to connect to Defender for IoT. Also, understand where each of your system elements falls in the Purdue Reference model for Industrial Control System (ICS) OT network segmentation.
+
+Defender for IoT network sensors receive traffic from multiple sources, either by switch mirror ports (SPAN ports) or network TAPs. The network sensor's management port connects to the business, corporate, or sensor management network for network management from the Azure portal or an on-premises management system.
+
+For example:
++
+### Purdue reference model and Defender for IoT
+
+The Purdue Reference Model is a model for Industrial Control System (ICS)/OT network segmentation that defines six levels, the components found at each level, and the relevant security controls for those networks.
+
+Each device type in your OT network falls in a specific level of the Purdue model. The following image shows how devices in your network spread across the Purdue model and connect to Defender for IoT services.
++
+The following table describes each level of the Purdue model when applied to Defender for IoT devices:
+
+|Name |Description |
+|||
+|**Level 0**: Cell and area | Level 0 consists of a wide variety of sensors, actuators, and devices involved in the basic manufacturing process. These devices perform the basic functions of the industrial automation and control system, such as: <br><br>- Driving a motor<br>- Measuring variables<br>- Setting an output<br>- Performing key functions, such as painting, welding, and bending |
+| **Level 1**: Process control | Level 1 consists of embedded controllers that control and manipulate the manufacturing process whose key function is to communicate with the Level 0 devices. In discrete manufacturing, those devices are programmable logic controllers (PLCs) or remote telemetry units (RTUs). In process manufacturing, the basic controller is called a distributed control system (DCS). |
+|**Level 2**: Supervisory | Level 2 represents the systems and functions associated with the runtime supervision and operation of an area of a production facility. These usually include the following: <br><br>- Operator interfaces or human-machine interfaces (HMIs) <br>- Alarms or alerting systems <br> - Process historian and batch management systems <br>- Control room workstations <br><br>These systems communicate with the PLCs and RTUs in Level 1. In some cases, they communicate or share data with the site or enterprise (Level 4 and Level 5) systems and applications. These systems are primarily based on standard computing equipment and operating systems (Unix or Microsoft Windows). |
+|**Levels 3 and 3.5**: Site-level and industrial perimeter network | The site level represents the highest level of industrial automation and control systems. The systems and applications that exist at this level manage site-wide industrial automation and control functions. Levels 0 through 3 are considered critical to site operations. The systems and functions that exist at this level might include the following: <br><br>- Production reporting (for example, cycle times, quality index, predictive maintenance) <br>- Plant historian <br>- Detailed production scheduling <br>- Site-level operations management <br>- Device and material management <br>- Patch launch server <br>- File server <br>- Industrial domain, Active Directory, terminal server <br><br>These systems communicate with the production zone and share data with the enterprise (Level 4 and Level 5) systems and applications. |
+|**Levels 4 and 5**: Business and enterprise networks | Level 4 and Level 5 represent the site or enterprise network where the centralized IT systems and functions exist. The IT organization directly manages the services, systems, and applications at these levels. |
+
+## Plan your sensor connections
+
+We recommend that Defender for IoT monitors traffic from Purdue layers 1 and 2. For some architectures, if OT traffic exists on layer 3, Defender for IoT will also monitor layer 3 traffic.
+
+While you're reviewing your site architecture to determine whether or not to monitor a specific switch, consider the following questions:
+
+- What is the cost/benefit versus the importance of monitoring this switch?
+- If a switch is unmanaged, can you monitor the traffic from a higher-level switch? If the ICS architecture is a [ring topology](#sample-ring-topology), only one switch in the ring needs monitoring.
+- What is the security or operational risk in the network?
+- Can you monitor the switch's VLAN? Is the VLAN visible in another switch that you can monitor?
+
+Review your OT and ICS network diagram together with your site engineers to define the best place to connect to Defender for IoT, and where you can get the most relevant traffic for monitoring. We recommend that you meet with the local network and operational teams to clarify expectations. Create lists of the following data about your network:
+
+- Known devices
+- Estimated number of devices
+- Vendors and industrial protocols
+- Switch models and whether they support port mirroring
+- Switch managers, including external resources
+- OT networks on your site
+
+For more information, see [Sample: Multi-layer, multi-tenant network](#sample-multi-layer-multi-tenant-network) and [More questions for planning your network connections](#more-questions-for-planning-your-network-connections).
++
+## Multi-sensor deployments
+
+The following table lists best practices when deploying multiple Defender for IoT sensors:
+
+| **Consideration** | **Value** | **Dependency** | **Number of sensors** |
+|--|--|--|--|
+| The maximum distance between switches | 80 meters | Prepared Ethernet cable | More than 1 |
+| Number of OT networks | More than 1 | No physical connectivity | More than 1 |
+| Number of switches | Can use RSPAN configuration | Up to eight switches with local span close to the sensor by cabling distance | More than 1 |
+
+## Traffic mirroring
+
+To see only relevant information for traffic analysis, you need to connect the Defender for IoT platform to a mirroring port on a switch or a TAP that includes only industrial ICS and SCADA traffic.
+
+For example:
++
+You can monitor switch traffic using a switch SPAN port, remote SPAN (RSPAN), or an active or passive aggregation TAP. Use the following tabs to learn more about each method.
+
+> [!NOTE]
+> SPAN and RSPAN are Cisco terminology. Other brands of switches have similar functionality but might use different terminology.
+>
+
+# [Switch SPAN port](#tab/switch-span-port)
+
+A switch port analyzer (SPAN) mirrors local traffic from interfaces on the switch to an interface on the same switch. Considerations for switch SPAN ports include:
+
+- Verify that the relevant switch supports the port mirroring function.
+
+- The mirroring option is disabled by default.
+
+- We recommend that you configure all of the switch's ports, even if no data is connected to them. Otherwise, a rogue device might be connected to an unmonitored port, and the sensor won't alert on it.
+
+- On OT networks that utilize broadcast or multicast messaging, configure the switch to mirror only RX (Receive) transmissions. Otherwise, each multicast message is repeated once for every active mirrored port, multiplying the bandwidth.
+
+For example, use the following configurations to set up a switch SPAN port for a Cisco 2960 switch with 24 ports running IOS.
+
+> [!NOTE]
+> The configuration samples below are intended only as guidance and not as instructions. Mirror ports on other Cisco operating systems and other switch brands are configured differently.
+
+**On a SPAN port configuration terminal**:
+
+```cli
+Cisco2960# configure terminal
+Cisco2960(config)# monitor session 1 source interface fastethernet 0/2 - 23 rx
+Cisco2960(config)# monitor session 1 destination interface fastethernet 0/24
+Cisco2960(config)# end
+Cisco2960# show monitor session 1
+Cisco2960# copy running-config startup-config
+```
+
+**In the configuration user interface**
+
+1. Enter global configuration mode
+1. Configure first 23 ports as session source (mirror only RX packets)
+1. Configure port 24 to be a session destination
+1. Return to privileged EXEC mode
+1. Verify the port mirroring configuration
+1. Save the configuration
+
+#### Monitoring multiple VLANs
+
+Defender for IoT allows monitoring VLANs configured in your network without any extra configuration, as long as the network switch is configured to send VLAN tags to Defender for IoT.
+
+For example, the following commands must be configured on a Cisco switch to support monitoring VLANs in Defender for IoT:
+
+**Monitor session**: This command configures the monitor session that sends the monitored VLAN traffic to the SPAN port.
+
+```cli
+monitor session 1 source interface Gi1/2
+monitor session 1 filter packet type good Rx
+monitor session 1 destination interface fastEthernet1/1 encapsulation dot1q
+```
+
+**Monitor Trunk Port F.E. Gi1/1**: VLANs are configured on the trunk port.
+
+```cli
+interface GigabitEthernet1/1
+switchport trunk encapsulation dot1q
+switchport mode trunk
+```
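+To sanity-check that the session and trunk are carrying the expected VLANs, you can run the following `show` commands on the switch (a sketch for Cisco IOS; output formats vary by platform):
+
+```cli
+show monitor session 1
+show interfaces trunk
+```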
+
+# [Remote SPAN (RSPAN)](#tab/rspan)
+
+A remote SPAN (RSPAN) session mirrors traffic from multiple distributed source ports into a dedicated remote VLAN. The data in the VLAN is then delivered through trunked ports across multiple switches to a specific switch that contains the physical destination port. This port connects to the Defender for IoT platform.
+
+Consider the following when configuring RSPAN:
+
+- RSPAN is an advanced feature that requires a special VLAN to carry the traffic that SPAN monitors between switches. Make sure that your switch supports RSPAN.
+- The mirroring option is disabled by default.
+- The remote VLAN must be allowed on the trunked port between the source and destination switches.
+- All switches that connect the same RSPAN session must be from the same vendor.
+- Make sure that the trunk port that's sharing the remote VLAN between the switches isn't defined as a mirror session source port.
+- The remote VLAN increases the bandwidth on the trunked port by the size of the mirrored session's bandwidth. Verify that the switch's trunk port supports the increased bandwidth.
+
+The following diagram shows an example of a remote VLAN architecture:
++
+For example, use the following steps to set up an RSPAN for a Cisco 2960 switch with 24 ports running IOS. A sample IOS configuration sketch follows the two procedures below.
+
+**To configure the source switch**:
+
+1. Enter global configuration mode.
+
+1. Create a dedicated VLAN.
+
+1. Identify the VLAN as the RSPAN VLAN.
+
+1. Return to "configure terminal" mode.
+
+1. Configure all 24 ports as session sources.
+
+1. Configure the RSPAN VLAN to be the session destination.
+
+1. Return to privileged EXEC mode.
+
+1. Verify the port mirroring configuration.
+
+**To configure the destination switch**:
+
+1. Enter global configuration mode.
+
+1. Configure the RSPAN VLAN to be the session source.
+
+1. Configure physical port 24 to be the session destination.
+
+1. Return to privileged EXEC mode.
+
+1. Verify the port mirroring configuration.
+
+1. Save the configuration.
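+The following commands sketch how the steps above might look on Cisco IOS. They're intended only as guidance, not as instructions; the RSPAN VLAN ID (200) and the port ranges are example values that you should adapt to your own environment:
+
+```cli
+! Source switch: create the RSPAN VLAN and mirror the source ports into it
+Cisco2960# configure terminal
+Cisco2960(config)# vlan 200
+Cisco2960(config-vlan)# remote-span
+Cisco2960(config-vlan)# exit
+Cisco2960(config)# monitor session 1 source interface fastethernet 0/1 - 23 rx
+Cisco2960(config)# monitor session 1 destination remote vlan 200
+Cisco2960(config)# end
+Cisco2960# show monitor session 1
+
+! Destination switch: receive the RSPAN VLAN and mirror it to the sensor port
+Cisco2960# configure terminal
+Cisco2960(config)# monitor session 1 source remote vlan 200
+Cisco2960(config)# monitor session 1 destination interface fastethernet 0/24
+Cisco2960(config)# end
+Cisco2960# show monitor session 1
+Cisco2960# copy running-config startup-config
+```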
+
+# [Active and passive aggregation (TAP)](#tab/TAP)
+
+An active or passive aggregation TAP is installed inline to the network cable and duplicates both RX and TX to the monitoring sensor.
+
+The terminal access point (TAP) is a hardware device that allows network traffic to flow from port A to port B, and from port B to port A, without interruption. It creates an exact copy of both sides of the traffic flow, continuously, without compromising network integrity. Some TAPs aggregate transmit and receive traffic by using switch settings if desired. If aggregation isn't supported, each TAP uses two sensor ports to monitor send and receive traffic.
+
+The advantages of TAPs include:
+
+- TAPs are hardware-based and can't be compromised
+- TAPs pass all traffic, even damaged messages, which the switches often drop
+- TAPs aren't processor sensitive, so packet timing is exact, whereas switches handle the mirror function as a low-priority task that can affect the timing of the mirrored packets
+
+For forensic purposes, a TAP is the best device.
+
+TAP aggregators can also be used for port monitoring. These devices are processor-based and aren't as intrinsically secure as hardware TAPs, and therefore might not reflect exact packet timing.
+
+The following diagram shows an example of a network setup with an active and passive TAP:
++
+#### Common TAP models
+
+The following TAP models have been tested for compatibility with Defender for IoT. Other vendors and models might also be compatible.
+
+- **Garland P1GCCAS**
+
+ :::image type="content" source="media/how-to-set-up-your-network/garland-p1gccas-v2.png" alt-text="Screenshot of Garland P1GCCAS." border="false":::
+
+ When using a Garland TAP, make sure jumpers are set as follows:
+
+ :::image type="content" source="media/how-to-set-up-your-network/jumper-setup-v2.jpg" alt-text="Screenshot of US Robotics switch.":::
+
+- **IXIA TPA2-CU3**
+
+ :::image type="content" source="media/how-to-set-up-your-network/ixia-tpa2-cu3-v2.png" alt-text="Screenshot of IXIA TPA2-CU3." border="false":::
+
+- **US Robotics USR 4503**
+
+ :::image type="content" source="media/how-to-set-up-your-network/us-robotics-usr-4503-v2.png" alt-text="Screenshot of US Robotics USR 4503.":::
+
+ When using a US Robotics TAP, make sure **Aggregation mode** is active.
+++
+## Sample connectivity models
+
+This section provides sample network models for Defender for IoT sensor connections.
+
+### Sample: Ring topology
+
+The following diagram shows an example of a ring network topology, in which each switch or node connects to exactly two other switches, forming a single continuous pathway for the traffic.
++
+### Sample: Linear bus and star topology
+
+In a star network, every host is connected to a central hub. In its simplest form, one central hub acts as a conduit to transmit messages. In the following example, lower switches aren't monitored, and traffic that remains local to these switches won't be seen. Devices might be identified based on ARP messages, but connection information will be missing.
++
+### Sample: Multi-layer, multi-tenant network
+
+The following diagram is a general abstraction of a multilayer, multitenant network, with an expansive cybersecurity ecosystem typically operated by an SOC and MSSP.
+
+Typically, NTA sensors are deployed in levels 0 to 3 of the Purdue model.
+++
+## More questions for planning your network connections
+
+This section lists more, common questions to consider when planning your network connections to Defender for IoT:
+
+- What are the overall goals of the implementation? Are a complete inventory and accurate network map important?
+
+- Are there multiple or redundant networks in the ICS? Are all the networks being monitored?
+
+- Are there communications between the ICS and the enterprise (business) network? Are these communications being monitored?
+
+- Are VLANs configured in the network design?
+
+- How is maintenance of the ICS performed, with fixed or transient devices?
+
+- Where are firewalls installed in the monitored networks?
+
+- Is there any routing in the monitored networks?
+
+- What OT protocols are active on the monitored networks?
+
+- If we connect to this switch, will we see communication between the HMI and the PLCs?
+
+- What is the physical distance between the ICS switches and the enterprise firewall?
+
+- Can unmanaged switches be replaced with managed switches, or is the use of network TAPs an option?
+
+- Is there any serial communication in the network? If yes, show it on the network diagram.
+
+- If the Defender for IoT appliance should be connected to that switch, is there physical available rack space in that cabinet?
+
+## Next steps
+
+For more information, see:
+
+- [Welcome to Microsoft Defender for IoT for organizations](overview.md)
+- [Quickstart: Get started with Defender for IoT](getting-started.md)
+- [About Microsoft Defender for IoT network setup](how-to-set-up-your-network.md)
defender-for-iot Pre Deployment Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/pre-deployment-checklist.md
+
+ Title: OT network pre-deployment checklist
+description: Use this checklist as a worksheet to ensure that your OT network is ready for a Microsoft Defender for IoT deployment.
Last updated : 02/22/2022++++
+# Predeployment checklist
+
+Use this checklist as a worksheet to ensure that your OT network is ready for a Microsoft Defender for IoT deployment.
+
+We recommend printing this browser page or using the print function to save it as a PDF file where you can check off things as you go. For example, on Windows machines, press **CTRL+P** to access the Print dialog for this page.
+
+Use this checklist together with [Prepare your OT network for Microsoft Defender for IoT](how-to-set-up-your-network.md).
+
+## Site checklist
+
+Review the following items before deploying your site:
+
+| **#** | **Task or activity** | **Status** | **Comments** |
+|--|--|--|--|
+| 1 | If you're using physical appliances, order your appliances. <br>For more information, see [Identify required appliances](how-to-identify-required-appliances.md). | ☐ | |
+| 2 | Identify the managed switches you want to monitor. | ☐ | |
+| 3 | Provide network details for sensors (IP address, subnet, D-GW, DNS, host). | ☐ | |
+| 4 | Create necessary firewall rules and the access list. For more information, see [Networking requirements](how-to-set-up-your-network.md#networking-requirements). | ☐ | |
+| 5 | Configure port mirroring, defining the *source* as the physical ports or VLANs you want to monitor, and the *destination* as the output port that's connected to the OT sensor. | ☐ | |
+| 6 | Connect the switch to the OT sensor. | ☐ | |
+| 7 | Create Active Directory groups or local users. | ☐ | |
+| 8 | On the Azure portal, add a Defender for IoT subscription and an OT sensor, and then activate your sensor. | ☐ | |
+| 9 | Validate the link and incoming traffic to the OT sensor. | ☐ | |
++
+| **Date** | **Note** | **Deployment date** | **Note** |
+|--|--|--|--|
+| Defender for IoT | | Site name* | |
+| Name | | Name | |
+| Position | | Position | |
+
+## Architecture review
+
+Review your industrial network architecture to define the proper location for the Defender for IoT equipment.
+
+1. **Global network diagram** - View a global network diagram of the industrial OT environment. For example:
+
+ :::image type="content" source="media/how-to-set-up-your-network/backbone-switch.png" alt-text="Diagram of the industrial OT environment for the global network.":::
+
+ > [!NOTE]
+ > The Defender for IoT appliance should be connected to a lower-level switch that sees the traffic between the ports on the switch.
+
+1. **Committed devices** - Provide the approximate number of network devices that will be monitored. You'll need this information when onboarding your subscription to Defender for IoT in the Azure portal. During the onboarding process, you'll be prompted to enter the number of devices in increments of 1000.
+
+1. **(Optional) Subnet list** - Provide a subnet list for the production networks and a description (optional).
+
+ | **#** | **Subnet name** | **Description** |
+ |--|--|--|
+ | 1 | | |
+ | 2 | | |
+ | 3 | | |
+ | 4 | | |
+
+1. **VLANs** - Provide a VLAN list of the production networks.
+
+ | **#** | **VLAN Name** | **Description** |
+ |--|--|--|
+ | 1 | | |
+ | 2 | | |
+ | 3 | | |
+ | 4 | | |
+
+1. **Switch models and mirroring support** - To verify that the switches have port mirroring capability, provide the switch model numbers that the Defender for IoT platform should connect to:
+
+ | **#** | **Switch** | **Model** | **Traffic mirroring support (SPAN, RSPAN, or none)** |
+ |--|--|--|--|
+ | 1 | | |
+ | 2 | | |
+ | 3 | | |
+ | 4 | | |
+
+1. **Third-party switch management** - Does a third party manage the switches? Y or N
+
+ If yes, who? __________________________________
+
+ What is their policy? __________________________________
+
+ For example:
+
+ - Siemens
+
+ - Rockwell Automation – Ethernet or IP
+
+ - Emerson – DeltaV, Ovation
+
+1. **Serial connection** - Are there devices that communicate via a serial connection in the network? Yes or No
+
+ If yes, specify which serial communication protocol: ________________
+
+ If yes, mark on the network diagram what devices communicate with serial protocols, and where they are:
+
+ *Add your network diagram with marked serial connection*
+
+1. **Quality of Service** - For Quality of Service (QoS), the default setting of the sensor is 1.5 Mbps. Specify if you want to change it: ________________
+
+ Business unit (BU): ________________
+
+1. **Sensor** - Specifications for site equipment
+
+ The sensor appliance is connected to switch SPAN port through a network adapter. It's connected to the customer's corporate network for management through another dedicated network adapter.
+
+ Provide address details for the sensor NIC that will be connected in the corporate network:
+
+ | Item | Appliance 1 | Appliance 2 | Appliance 3 |
+ |--|--|--|--|
+ | Appliance IP address | | | |
+ | Subnet | | | |
+ | Default gateway | | | |
+ | DNS | | | |
+ | Host name | | | |
+
+1. **iDRAC/iLO/Server management**
+
+ | Item | Appliance 1 | Appliance 2 | Appliance 3 |
+ |--|--|--|--|
+ | Appliance IP address | | | |
+ | Subnet | | | |
+ | Default gateway | | | |
+ | DNS | | | |
+
+1. **On-premises management console**
+
+ | Item | Active | Passive (when using HA) |
+ |--|--|--|
+ | IP address | | |
+ | Subnet | | |
+ | Default gateway | | |
+ | DNS | | |
+
+1. **SNMP**
+
+ | Item | Details |
+ |--|--|
+ | IP | |
+ | IP address | |
+ | Username | |
+ | Password | |
+ | Authentication type | MD5 or SHA |
+ | Encryption | DES or AES |
+ | Secret key | |
+ | SNMP v2 community string | |
+
+1. **On-premises management console SSL certificate**
+
+ Are you planning to use an SSL certificate? Yes or No
+
+ If yes, what service will you use to generate it? What attributes will you include in the certificate (for example, domain or IP address)?
+
+1. **SMTP authentication**
+
+ Are you planning to use SMTP to forward alerts to an email server? Yes or No
+
+ If yes, what authentication method will you use?
+
+1. **Active Directory or local users**
+
+ Contact an Active Directory administrator to create an Active Directory site user group or create local users. Be sure to have your users ready for the deployment day.
+
+1. IoT device types in the network
+
+ | Device type | Number of devices in the network | Average bandwidth |
+ |--|--|--|
+ | Camera | | |
+ | X-ray machine | | |
+ | | |
+ | | |
+ | | |
+ | | |
+ | | |
+ | | |
+ | | |
+ | | |
+
+## Next steps
+
+For more information, see:
+
+- [Quickstart: Get started with Defender for IoT](getting-started.md)
+- [Best practices for planning your OT network monitoring](plan-network-monitoring.md)
+- [Prepare your network for Microsoft Defender for IoT](how-to-set-up-your-network.md)
defender-for-iot Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes-archive.md
Last updated 03/03/2022
[!INCLUDE [Banner for top of topics](../includes/banner.md)]
-This article serves as an archive for features and enhancements released for Microsoft Defender for IoT for organizations more than 6 months ago.
+This article serves as an archive for features and enhancements released for Microsoft Defender for IoT for organizations more than nine months ago.
For more recent updates, see [What's new in Microsoft Defender for IoT?](release-notes.md).
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Last updated 03/22/2022
[!INCLUDE [Banner for top of topics](../includes/banner.md)]
-This article lists Microsoft Defender for IoT's new features and enhancements for end-user organizations from the last 6 months.
+This article lists Microsoft Defender for IoT's new features and enhancements for end-user organizations from the last nine months.
-Features released earlier than 6 months ago are listed in [What's new archive for in Microsoft Defender for IoT for organizations](release-notes-archive.md).
+Features released earlier than nine months ago are listed in [What's new archive for in Microsoft Defender for IoT for organizations](release-notes-archive.md).
Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
For more information, see the [Microsoft Security Development Lifecycle practice
| Version | Date released | End support date | |--|--|--|
+| 22.1.4 | 04/2022 | 12/2022 |
| 22.1.3 | 03/2022 | 11/2022 | | 22.1.1 | 02/2022 | 10/2022 | | 10.5.5 | 12/2021 | 09/2022 |
For more information, see the [Microsoft Security Development Lifecycle practice
| 10.5.3 | 10/2021 | 07/2022 |
| 10.5.2 | 10/2021 | 07/2022 |
+## April 2022
+
+**Sensor software version**: 22.1.4
+
+### Extended device property data in the Device inventory
+
+Starting for sensors updated to version 22.1.4, the **Device inventory** page on the Azure portal shows extended data for the following fields:
+
+- **Description**
+- **Tags**
+- **Protocols**
+- **Scanner**
+- **Last Activity**
+
+For more information, see [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md).
+ ## March 2022
defender-for-iot Tutorial Getting Started Eiot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-getting-started-eiot-sensor.md
Title: Get started with Enterprise IoT
-description: In this tutorial, you will learn how to onboard to Microsoft Defender for IoT with an Enterprise IoT deployment
+ Title: Get started with enterprise IoT - Microsoft Defender for IoT
+description: In this tutorial, you'll learn how to onboard to Microsoft Defender for IoT with an Enterprise IoT deployment
Last updated 12/12/2021
In this tutorial, you learn how to:
## Prerequisites
-An Azure subscription is required for this tutorial.
+Before you start, make sure that you have the following:
-If you don't already have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- Completed [Quickstart: Get started with Defender for IoT](getting-started.md) so that you have an Azure subscription added to Defender for IoT. If you already have a subscription that is onboarded for Microsoft Defender for IoT for OT environments, you'll need to perform the same procedure again to add a new subscription.
-If you already have a subscription that is onboarded for Microsoft Defender for IoT for OT environments, you will need to create a new subscription. To learn how to onboard a subscription, see [Onboard a subscription](how-to-manage-subscriptions.md#onboard-a-subscription).
+- The following Azure permissions:
-There is a minimum security level needed to access different parts of Microsoft Defender for IoT. You must have a level of Security Owner, or a Subscription contributor of the subscription to onboard a subscription, and commit to a pricing. Security Reader level permissions to access the Defender for IoT user interface.
++
+There's a minimum security level needed to access different parts of Microsoft Defender for IoT. To onboard a subscription and commit to a pricing plan, you must be a Security Owner or a Subscription Contributor on the subscription. Security Reader permissions are sufficient to access the Defender for IoT user interface.
The following table describes user access permissions to Microsoft Defender for IoT portal tools:
The following table describes user access permissions to Microsoft Defender for
## Set up a server or Virtual Machine (VM)
-Before you deploy your Enterprise IoT sensor, you will need to configure your server, or VM, and connect a Network Interface Card (NIC) to a switch monitoring (SPAN) port.
+Before you deploy your Enterprise IoT sensor, you'll need to configure your server, or VM, and connect a Network Interface Card (NIC) to a switch monitoring (SPAN) port.
**To set up a server, or VM**:
Run the command that you received, and saved when you registered the Enterprise
* If yes, select **Yes**.
-1. (Optional) If you are setting up a proxy server.
+1. (Optional) If you're setting up a proxy server:
1. Enter the proxy server host, and select **Ok**.
The installation will now finish.
sudo docker logs -f compose_attributes-collector_1 ```
- Ensure that packets are being sent to the Event Hub.
+ Ensure that packets are being sent to the Event Hubs.
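As a quick sanity check before following the logs, you can also confirm that the collector container is running. The container name below is taken from the log command above and might differ in your deployment.

```bash
# List the attributes collector container and confirm that its status is "Up".
sudo docker ps --filter "name=attributes-collector"
```
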
## View your enterprise IoT devices in the Enterprise IoT device inventory
-Once you have validated your setup, the device inventory will start to populate with all of your devices after 15 minutes.
+Once you've validated your setup, the device inventory will start to populate with all of your devices after 15 minutes.
**To view your populated device inventory**:
Once you have validated your setup, the device inventory will start to populate
1. From the left side toolbar, select **Device inventory**.
-The device inventory is where you will be able to view all of your device systems, and network information. Learn more about the device inventory see [Manage your IoT devices with the device inventory for organizations](how-to-manage-device-inventory-for-organizations.md#manage-your-iot-devices-with-the-device-inventory-for-organizations).
+The device inventory is where you can view all of your device, system, and network information. To learn more about the device inventory, see [Manage your IoT devices with the device inventory for organizations](how-to-manage-device-inventory-for-organizations.md#manage-your-iot-devices-with-the-device-inventory-for-organizations).
## Remove the sensor (optional)
sudo apt purge -y microsoft-eiot-sensor
## Next steps
-[Manage your IoT devices with the device inventory for organizations](how-to-manage-device-inventory-for-organizations.md#manage-your-iot-devices-with-the-device-inventory-for-organizations)
+For more information, see:
+
+- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)
+- [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md)
+- [Manage your IoT devices with the device inventory for organizations](how-to-manage-device-inventory-for-organizations.md)
+- [View and manage alerts on the Defender for IoT portal](how-to-manage-cloud-alerts.md)
+- [Use Azure Monitor workbooks in Microsoft Defender for IoT (Public preview)](workbooks.md)
+- [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md)
defender-for-iot Tutorial Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-onboarding.md
Title: Microsoft Defender for IoT trial setup
-description: In this tutorial, you will learn how to onboard to Microsoft Defender for IoT with a virtual sensor, on a virtual machine, with a trial subscription of Microsoft Defender for IoT.
+ Title: Get started with Microsoft Defender for IoT for OT security
+description: This tutorial describes how to use Microsoft Defender for IoT to set up a network for OT system security.
Previously updated : 11/09/2021- Last updated : 03/24/2022
-# Tutorial: Microsoft Defender for IoT trial setup
+# Tutorial: Get started with Microsoft Defender for IoT for OT security
-This tutorial will help you learn how to onboard to Microsoft Defender for IoT with a virtual sensor, on a virtual machine, with a trial subscription of Microsoft Defender for IoT. This tutorial will show you the optimal setup for someone who wishes to test Microsoft Defender for IoT, before signing up, and incorporating it into their environment.
+This tutorial describes how to set up your network for OT system security monitoring by using a virtual, cloud-connected sensor on a virtual machine (VM) and a trial subscription of Microsoft Defender for IoT.
-By using virtual environments, along with the software needed to create a sensor, Defender for IoT allows you to:
--- Use passive, agentless network monitoring to gain a complete inventory of all your IoT, and OT devices, their details, and how they communicate, with zero effect on the IoT, and OT network.--- Identify risks and vulnerabilities in your IoT, and OT environment. For example, identify unpatched devices, open ports, unauthorized applications, and unauthorized connections. You can also identify changes to device configurations, PLC code, and firmware.--- Detect anomalous or unauthorized activities with specialized IoT, and OT-aware threat intelligence and behavioral analytics. You can even detect advanced threats missed by static IOCs, like zero-day malware, fileless malware, and living-off-the-land tactics.--- Integrate into Microsoft Sentinel for a bird's-eye view of your entire organization. Implement unified IoT, and OT security governance with integration into your existing workflows, including third-party tools like Splunk, IBM QRadar, and ServiceNow.
+> [!NOTE]
+> If you're looking to set up security monitoring for enterprise IoT systems, see [Tutorial: Get started with Enterprise IoT](tutorial-getting-started-eiot-sensor.md) instead.
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Onboard with Microsoft Defender for IoT
-> * Download the ISO for the virtual sensor
-> * Create a virtual machine for the sensor
+> * Download software for a virtual sensor
+> * Create a VM for the sensor
> * Install the virtual sensor software > * Configure a SPAN port
-> * Onboard, and activate the virtual sensor
+> * Verify your cloud connection
+> * Onboard and activate the virtual sensor
## Prerequisites -- Permissions: Azure **Subscription Owners**, or **Subscription Contributors** level.--- At least one device to monitor connected to a SPAN port on the switch.--- Either VMware (ESXi 5.5 or later), or Hyper-V hypervisor (Windows 10 Pro or Enterprise) is installed and operational.--- An Azure account. If you do not already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).-
-## Onboard with Microsoft Defender for IoT
-
-To get started with Microsoft Defender for IoT, you must have a Microsoft Azure subscription. If you do not have a subscription, you can [create your Azure free account today](https://azure.microsoft.com/free/).
-
-To evaluate Defender for IoT, you can use a trial subscription. The trial is valid for 30 days and supports up to 1000 committed devices. The trial allows you to deploy a virtual sensor on your network. Use the sensors to monitor traffic, analyze data, generate alerts, learn about network risks and vulnerabilities, and more. The trial also allows you to deploy a virtual on-premises management console to view the aggregated information generated by the sensor.
-
-**To onboard a subscription to Microsoft Defender for IoT**:
-
-1. Navigate to the [Azure portal](https://portal.azure.com/).
-
-1. Search for, and select **Microsoft Defender for IoT**.
+Before you start, make sure that you have the following:
-1. Select **Onboard subscription**.
+- Completed [Quickstart: Get started with Defender for IoT](getting-started.md) so that you have an Azure subscription added to Defender for IoT.
- :::image type="content" source="media/tutorial-onboarding/onboard-subscription.png" alt-text="Screenshot of the selecting the onboard subscription button from the Getting started page.":::
+- Azure permissions of **Security admin**, **Subscription contributor**, or **Subscription owner** on your subscription
-1. On the Pricing page, select **Start with a trial**.
+- At least one device to monitor, with the device connected to a SPAN port on a switch.
- :::image type="content" source="media/tutorial-onboarding/start-with-trial.png" alt-text="Screenshot of the start with a trial button to open the trial window.":::
+- VMware, ESXi 5.5 or later, installed and operational:
-1. Select a subscription from the Onboard trial subscription pane and then select **Evaluate**.
-1. Confirm your evaluation.
+- <a name="hw"></a>Available hardware resources for your VM as follows:
-## Download the ISO for the virtual sensor
+ | Deployment type | Corporate | Enterprise | SMB |
+ |--|--|--|--|
+ | **Maximum bandwidth** | 2.5 Gb/sec | 800 Mb/sec | 160 Mb/sec |
+ | **Maximum protected devices** | 12,000 | 10,000 | 800 |
-The virtual appliances have minimum specifications that are required for both the sensor and management console. The following table shows the specifications needed for the sensor depending on your environment.
+- Details for the following network parameters to use for your sensor appliance:
-### Virtual sensor
+ - A management network IP address
+ - A sensor subnet mask
+ - An appliance hostname
+ - A DNS address
+ - A default gateway
+ - Any input interfaces
-| Type | Corporate | Enterprise | SMB |
-|--|--|--|--|
-| vCPU | 32 | 8 | 4 |
-| Memory | 32 GB | 32 GB | 8 GB |
-| Storage | 5.6 TB | 1.8 TB | 500 GB |
-**To download the ISO file for the virtual sensor**:
+## Download software for your virtual sensor
-1. Navigate to the [Azure portal](https://portal.azure.com/).
+Defender for IoT's solution for OT security includes on-premises network sensors, which connect to Defender for IoT and send device data for analysis.
-1. Search for, and select **Microsoft Defender for IoT**.
+You can either purchase pre-configured appliances or bring your own appliance and install the software yourself. This tutorial uses your own machine and VMware and describes how to download and install the sensor software yourself.
-1. On the Getting started page, select the **Sensor** tab.
+**To download software for your virtual sensors**:
-1. Select **Download**.
+1. Go to Defender for IoT in the Azure portal. On the **Getting started** page, select the **Sensor** tab.
- :::image type="content" source="media/tutorial-onboarding/sensor-download.png" alt-text="Screenshot of the sensor tab, select download, to download the ISO file for the virtual sensor.":::
+1. In the **Purchase an appliance and install software** box, ensure that the default option is selected for the latest and recommended software version, and then select **Download**.
-## Create a virtual machine for the sensor
+1. Save the downloaded software in a location that will be accessible from your VM.
-The virtual sensor supports both VMware, and Hyper-V deployment options. Before you begin the installation, make sure you have the following items:
+## Create a VM for your sensor
-- VMware (ESXi 5.5 or later), or Hyper-V hypervisor (Windows 10 Pro or Enterprise) installed and operational.
+This procedure describes how to create a VM for your sensor with VMware ESXi.
-- Available hardware resources for the virtual machine.
+Defender for IoT also supports other processes, such as using Hyper-V or physical sensors. For more information, see [Defender for IoT installation](how-to-install-software.md).
-- ISO installation file for the Microsoft Defender for IoT sensor.
+**To create a VM for your sensor**:
-- Make sure the hypervisor is running.-
-### Create the virtual machine for the sensor with ESXi
-
-**To create the virtual machine for the sensor (ESXi)**:
+1. Make sure that you have the sensor software downloaded and accessible, and that VMware is running on your machine.
1. Sign in to the ESXi, choose the relevant **datastore**, and select **Datastore Browser**.
The virtual sensor supports both VMware, and Hyper-V deployment options. Before
1. Select **Create new virtual machine**, and then select **Next**.
-1. Add a sensor name and choose:
+1. Add a sensor name and then define the following options:
- Compatibility: **&lt;latest ESXi version&gt;**
The virtual sensor supports both VMware, and Hyper-V deployment options. Before
1. Choose the relevant datastore and select **Next**.
-1. Change the virtual hardware parameters according to the required [architecture](#download-the-iso-for-the-virtual-sensor).
+1. Change the virtual hardware parameters according to the required specifications for your needs. For more information, see the [table in the Prerequisites](#hw) section above.
-1. For **CD/DVD Drive 1**, select **Datastore ISO file** and choose the ISO file that you uploaded earlier.
+1. For **CD/DVD Drive 1**, select **Datastore ISO file** and select the Defender for IoT software you'd [downloaded earlier](#download-software-for-your-virtual-sensor).
1. Select **Next** > **Finish**. 1. Power on the VM, and open a console.
-### Create a virtual machine for the sensor with Hyper-V
-
-This procedure describes how to create a virtual machine by using Hyper-V.
-
-**To create a virtual machine with Hyper-V**:
-
-1. Create a virtual disk in Hyper-V Manager.
-
-1. Select **format = VHDX**.
-
-1. Select **type = Dynamic Expanding**.
-
-1. Enter the name and location for the VHD.
-
-1. Enter the required size (according to the [architecture](#download-the-iso-for-the-virtual-sensor)).
-
-1. Review the summary and select **Finish**.
-
-1. On the **Actions** menu, create a new virtual machine.
-
-1. Enter a name for the virtual machine.
+## Install sensor software
-1. Select **Specify Generation** > **Generation 1**.
-
-1. Specify the memory allocation (according to the [architecture](#download-the-iso-for-the-virtual-sensor)) and select the check box for dynamic memory.
-
-1. Configure the network adaptor according to your server network topology.
-
-1. Connect the VHDX created previously to the virtual machine.
-
-1. Review the summary and select **Finish**.
-
-1. Right-click the new virtual machine and select **Settings**.
-
-1. Select **Add Hardware** and add a new network adapter.
-
-1. Select the virtual switch that will connect to the sensor management network.
-
-1. Allocate CPU resources (according to the [architecture](#download-the-iso-for-the-virtual-sensor)).
-
-1. Connect the management console's ISO image to a virtual DVD drive.
-
-1. Start the virtual machine.
-
-1. On the **Actions** menu, select **Connect** to continue the software installation.
-
-## Install the virtual sensor software with ESXi or Hyper-V
-
-Either ESXi, or Hyper-V can be used to install the software for the virtual sensor.
+This procedure describes how to install the sensor software on your VM.
**To install the software on the virtual sensor**:
-1. Open the virtual machine console.
+1. Open the VM console.
1. The VM will start from the ISO image, and the language selection screen will appear. Select **English**.
-1. Select the required [architecture](#download-the-iso-for-the-virtual-sensor).
+1. Select the required specifications for your needs, as defined in the [table in the Prerequisites](#hw) section above.
-1. Define the appliance profile and network properties:
+1. Define the appliance profile and network properties as follows:
| Parameter | Configuration | | -| - |
- | **Hardware profile** | Based on the required [architecture](#download-the-iso-for-the-virtual-sensor). |
+ | **Hardware profile** | Depending on your [system specifications](#hw). |
| **Management interface** | **ens192** |
| **Network parameters (provided by the customer)** | **management network IP address:** <br/>**subnet mask:** <br>**appliance hostname:** <br/>**DNS:** <br/>**default gateway:** <br/>**input interfaces:**|
- | **bridge interfaces:** | There's no need to configure the bridge interface. This option is for special use cases only. |
+
+ You don't need to configure the bridge interface, which is relevant for special use cases only.
1. Enter **Y** to accept the settings.
-1. Sign-in credentials are automatically generated and presented. Copy the username and password in a safe place, because they're required to sign-in, and manage your device. The username and password will not be presented again.
+1. The following credentials are automatically generated and presented. Copy the usernames and passwords to a safe place, because they're required to sign in to and manage your sensor. The usernames and passwords won't be presented again.
- **Support**: The administrative user for user management. - **CyberX**: The equivalent of root for accessing the appliance.
-1. The appliance restarts.
-
-1. Access the sensor via the IP address previously configured: `https://ip_address`.
+1. When the appliance restarts, access the sensor via the IP address previously configured: `https://<ip_address>`.
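Optionally, you can first confirm that the sensor console responds from another machine on the management network. This is only a sketch; `<ip_address>` is the address you configured during installation, and `-k` is needed because the sensor initially presents a self-signed certificate.

```bash
# Expect an HTTP status line (for example, 200 or a redirect) if the sensor console is up.
curl -k -I https://<ip_address>
```
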
### Post-installation validation
-To validate the installation of a physical appliance, you need to perform many tests.
+This procedure describes how to validate your installation using the sensor's own system health checks, and is available to both the **Support** and **CyberX** sensor users.
-The validation is available to both the **Support**, and **CyberX** user.
-
-**To access the post validation tool**:
+**To validate your installation**:
1. Sign in to the sensor.
-1. Select **System Settings**> **Health and troubleshooting** > **System Health Check**.
-
-1. Select a command.
-
-For post-installation validation, test that:
-- the system is running-- you have the right version-- all of the input interfaces that were configured during the installation process are running
+1. Select **System Settings**> **Sensor management** > **System Health Check**.
-**To verify that the system is running**:
+1. Select the following commands:
-1. Select **Appliance**, and ensure that each line item shows `Running` and the bottom line states `System is up`.
-
-1. Select **Version**, and ensure that the correct version appears.
-
-1. Select **ifconfig** to display the parameters for the appliance's physical interfaces, and ensure that they are correct.
+ - **Appliance** to check that the system is running. Verify that each line item shows **Running** and that the last line states that the **System is up**.
+ - **Version** to verify that you have the correct version installed.
+ - **ifconfig** to verify that all input interfaces configured during installation are running.
## Configure a SPAN port
-A virtual switch does not have mirroring capabilities. However, you can use promiscuous mode in a virtual switch environment. Promiscuous mode is a mode of operation, as well as a security, monitoring and administration technique, that is defined at the virtual switch, or portgroup level. By default, Promiscuous mode is disabled. When Promiscuous mode is enabled the virtual machineΓÇÖs network interfaces that are in the same portgroup will use the Promiscuous mode to view all network traffic that goes through that virtual switch. You can implement a workaround with either ESXi, or Hyper-V.
+Virtual switches don't have mirroring capabilities. However, for the sake of this tutorial you can use *promiscuous mode* in a virtual switch environment to view all network traffic that goes through the virtual switch.
+
+This procedure describes how to configure a SPAN port using a workaround with VMware ESXi.
-### Configure a SPAN port with ESXi
+> [!NOTE]
+> Promiscuous mode is an operating mode and a security monitoring technique that lets a VM's network interfaces in the same portgroup as the virtual switch see all of the network traffic that goes through that switch. Promiscuous mode is disabled by default, but can be defined at the virtual switch or portgroup level.
+>
**To configure a SPAN port with ESXi**:
A virtual switch does not have mirroring capabilities. However, you can use prom
1. Connect to the sensor, and verify that mirroring works.
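If you prefer to work from the ESXi shell rather than the vSphere UI, the following sketch shows one way to enable promiscuous mode on a standard vSwitch. The switch name `vSwitch1` is an assumption; substitute the vSwitch that carries the traffic you want to mirror.

```bash
# Allow promiscuous mode on the standard vSwitch (run from the ESXi shell).
esxcli network vswitch standard policy security set --vswitch-name=vSwitch1 --allow-promiscuous=true

# Verify that the security policy now allows promiscuous mode.
esxcli network vswitch standard policy security get --vswitch-name=vSwitch1
```
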
-### Configure a SPAN port with Hyper-V
-
-Prior to starting you will need to:
--- Ensure that there is no instance of a virtual appliance running.--- Enable Ensure SPAN on the data port, and not the management port.
+## Verify cloud connections
-- Ensure that the data port SPAN configuration is not configured with an IP address.
+This tutorial describes how to create a cloud-connected sensor that connects directly to Defender for IoT in the cloud.
-**To configure a SPAN port with Hyper-V**:
+Before continuing, make sure that your sensor can access the cloud using HTTPS on port 443 to the following Microsoft domains:
-1. Open the Virtual Switch Manager.
+- **IoT Hub**: `*.azure-devices.net`
+- **Threat Intelligence**: `*.blob.core.windows.net`
+- **Event Hubs**: `*.servicebus.windows.net`
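To spot-check that outbound access on TCP port 443 is open from the sensor's network, you can test representative hosts in each of these domains. The host names below are placeholders only; substitute the actual endpoints used by your deployment.

```bash
# Placeholder host names - replace with the endpoints your sensor actually uses.
for host in contoso-hub.azure-devices.net contosostore.blob.core.windows.net contoso-ns.servicebus.windows.net; do
  if nc -z -w 5 "$host" 443; then
    echo "$host:443 reachable"
  else
    echo "$host:443 blocked"
  fi
done
```
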
-1. In the Virtual Switches list, select **New virtual network switch** > **External** as the dedicated spanned network adapter type.
+> [!TIP]
+> Defender for IoT supports other cloud-connection methods, including proxies or multi-cloud vendors. For more information, see [OT sensor cloud connection methods](architecture-connections.md), [Connect your OT sensors to the cloud](connect-sensors.md), and [Cloud-connected vs local sensors](architecture.md#cloud-connected-vs-local-sensors).
+>
- :::image type="content" source="media/tutorial-onboarding/new-virtual-network.png" alt-text="Screenshot of selecting new virtual network and external before creating the virtual switch.":::
+## Onboard and activate the virtual sensor
-1. Select **Create Virtual Switch**.
-
-1. Under connection type, select **External Network**.
-
-1. Ensure the checkbox for **Allow management operating system to share this network adapter** is checked.
-
- :::image type="content" source="media/tutorial-onboarding/external-network.png" alt-text="Select external network, and allow the management operating system to share the network adapter.":::
-
-1. Select **OK**.
-
-#### Attach a SPAN Virtual Interface to the virtual switch
-
-You are able to attach a SPAN Virtual Interface to the Virtual Switch through Windows PowerShell, or through Hyper-V Manager.
-
-**To attach a SPAN Virtual Interface to the virtual switch with PowerShell**:
-
-1. Select the newly added SPAN virtual switch, and add a new network adapter with the following command:
-
- ```bash
- ADD-VMNetworkAdapter -VMName VK-C1000V-LongRunning-650 -Name Monitor -SwitchName vSwitch_Span
- ```
-
-1. Enable port mirroring for the selected interface as the span destination with the following command:
-
- ```bash
- Get-VMNetworkAdapter -VMName VK-C1000V-LongRunning-650 | ? Name -eq Monitor | Set-VMNetworkAdapter -PortMirroring Destination
- ```
-
- | Parameter | Description |
- |--|--|
- | VK-C1000V-LongRunning-650 | CPPM VA name |
- |vSwitch_Span |Newly added SPAN virtual switch name |
- |Monitor |Newly added adapter name |
-
-1. Select **OK**.
-
-These commands set the name of the newly added adapter hardware to be `Monitor`. If you are using Hyper-V Manager, the name of the newly added adapter hardware is set to `Network Adapter`.
-
-**To attach a SPAN Virtual Interface to the virtual switch with Hyper-V Manager**:
-
-1. Under the Hardware list, select **Network Adapter**.
-
-1. In the Virtual Switch field, select **vSwitch_Span**.
-
- :::image type="content" source="media/tutorial-onboarding/vswitch-span.png" alt-text="Screenshot of selecting the following options on the virtual switch screen.":::
-
-1. In the Hardware list, under the Network Adapter drop-down list, select **Advanced Features**.
-
-1. In the Port Mirroring section, select **Destination** as the mirroring mode for the new virtual interface.
-
- :::image type="content" source="media/tutorial-onboarding/destination.png" alt-text="Screenshot of the selections needed to configure mirroring mode.":::
-
-1. Select **OK**.
-
-#### Enable Microsoft NDIS Capture Extensions for the Virtual Switch
-
-Microsoft NDIS Capture Extensions will need to be enabled for the new virtual switch.
-
-**To enable Microsoft NDIS Capture Extensions for the newly added virtual switch**:
-
-1. Open the Virtual Switch Manager on the Hyper-V host.
-
-1. In the Virtual Switches list, expand the virtual switch name `vSwitch_Span` and select **Extensions**.
-
-1. In the Switch Extensions field, select **Microsoft NDIS Capture**.
-
- :::image type="content" source="media/tutorial-onboarding/microsoft-ndis.png" alt-text="Screenshot of enabling the Microsoft NDIS by selecting it from the switch extensions menu.":::
-
-1. Select **OK**.
-
-#### Set the Mirroring Mode on the external port
-
-Mirroring mode will need to be set on the external port of the new virtual switch to be the source.
-
-You will need to configure the Hyper-V virtual switch (vSwitch_Span) to forward any traffic that comes to the external source port, to the virtual network adapter that you configured as the destination.
-
-Use the following PowerShell commands to set the external virtual switch port to source mirror mode:
-
-```bash
-$ExtPortFeature=Get-VMSystemSwitchExtensionPortFeature -FeatureName "Ethernet Switch Port Security Settings"
-$ExtPortFeature.SettingData.MonitorMode=2
-Add-VMSwitchExtensionPortFeature -ExternalPort -SwitchName vSwitch_Span -VMSwitchExtensionFeature $ExtPortFeature
-```
-
-| Parameter | Description |
-|--|--|
-| vSwitch_Span | Newly added SPAN virtual switch name. |
-| MonitorMode=2 | Source |
-| MonitorMode=1 | Destination |
-| MonitorMode=0 | None |
-
-Use the following PowerShell command to verify the monitoring mode status:
-
-```bash
-Get-VMSwitchExtensionPortFeature -FeatureName "Ethernet Switch Port Security Settings" -SwitchName vSwitch_Span -ExternalPort | select -ExpandProperty SettingData
-```
-
-| Parameter | Description |
-|--|--|
-| vSwitch_Span | Newly added SPAN virtual switch name |
-## Onboard, and activate the virtual sensor
-
-Before you can start using your Defender for IoT sensor, you will need to onboard the created virtual sensor to your Azure subscription, and download the virtual sensor's activation file to activate the sensor.
+Before you can start using your Defender for IoT sensor, you'll need to onboard the created virtual sensor to your Azure subscription and download the virtual sensor's activation file to activate the sensor.
### Onboard the virtual sensor **To onboard the virtual sensor:**
-1. Go to [Defender for IoT: Getting started](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started) in the Azure portal.
+1. In the Azure portal, go to the [**Defender for IoT > Getting started**](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started) page.
-1. Select **Onboard sensor**.
+1. At the bottom left, select **Set up OT/ICS Security**.
:::image type="content" source="media/tutorial-onboarding/onboard-a-sensor.png" alt-text="Screenshot of selecting to onboard the sensor to start the onboarding process for your sensor.":::
-1. Enter a name for the sensor.
+ In the **Set up OT/ICS Security** page, you can leave the **Step 1: Did you set up a sensor?** and **Step 2: Configure SPAN port or TAP** steps collapsed, because you've completed these tasks earlier in this tutorial.
- We recommend that you include the IP address of the sensor as part of the name, or use an easily identifiable name. Naming your sensor in this way will ensure easier tracking.
+1. In **Step 3: Register this sensor with Microsoft Defender for IoT**, define the following values:
-1. Select a subscription from the drop-down menu.
+ |Name |Description |
+ |||
+ |**Sensor name** | Enter a name for the sensor. <br><br>We recommend that you include the IP address of the sensor as part of the name, or use an easily identifiable name. Naming your sensor in this way will ensure easier tracking. |
+ |**Subscription** | Select the Azure subscription where you want to add your sensors. |
+ |**Cloud connected** | Select to connect your sensor to Azure. |
+ |**Automatic threat intelligence updates** | Displayed only when the **Cloud connected** option is toggled on. Select to have Microsoft threat intelligence packages automatically updated on your sensor. For more information, see [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md). |
+ |**Sensor version** | Displayed only when the **Cloud connected** option is toggled on. Select the software version installed on your sensor. |
+ |**Site** | Define the site where you want to associate your sensor, or select **Create site** to create a new site. Define a display name for your site and optional tags to help identify the site later. |
+ |**Zone** | Define the zone where you want to deploy your sensor, or select **Create zone** to create a new one. |
- :::image type="content" source="media/tutorial-onboarding/name-subscription.png" alt-text="Screenshot of entering a meaningful name, and connect your sensor to a subscription.":::
+1. Select **Register** to add your sensor to Defender for IoT. A success message is displayed and your activation file is automatically downloaded. The activation file is unique for your sensor and contains instructions about your sensor's management mode.
-1. Choose a sensor connection mode by using the **Cloud connected** toggle. If the toggle is on, the sensor is cloud connected. If the toggle is off, the sensor is locally managed.
+1. Save the downloaded activation file in a location that will be accessible to the user signing into the console for the first time.
- - **Cloud-connected sensors**: Information that the sensor detects is displayed in the sensor console. Alert information is delivered to Defender for Cloud on Azure and can be shared with other Azure services, such as Microsoft Sentinel. In addition, threat intelligence packages can be pushed from Defender for IoT to sensors. Conversely when, the sensor is not cloud connected, you must download threat intelligence packages and then upload them to your enterprise sensors. To allow Defender for IoT to push packages to sensors, enable the **Automatic Threat Intelligence Updates** toggle. For more information, see [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md).
+1. At the bottom left of the page, select **Finish**. You can now see your new sensor listed on the Defender for IoT **Sites and sensors** page.
- For cloud connected sensors, the name defined during onboarding is the name that appears in the sensor console. You can't change this name from the console directly. For locally managed sensors, the name applied during onboarding will be stored in Azure but can be updated in the sensor console.
+For more information, see [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md).
- For more information, see [Sensor connection methods](architecture-connections.md) and [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md).
+### Activate your sensor
- - **Locally managed sensors**: Information that sensors detect is displayed in the sensor console. If you're working in an air-gapped network and want a unified view of all information detected by multiple locally managed sensors, work with the on-premises management console.
+This procedure describes how to use the sensor activation file downloaded from Defender for IoT in the Azure portal to activate your newly added sensor.
-1. Select a site to associate your sensor to. Define the display name, and zone. You can also add descriptive tags. The display name, zone, and tags are descriptive entries on the [View onboarded sensors](how-to-manage-sensors-on-the-cloud.md#manage-on-boarded-sensors).
+**To activate your sensor**:
-1. Select **Register**.
+1. Go to the sensor console from your browser by using the IP defined during the installation. The sign-in dialog box opens.
-### Download the sensor activation file
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-1.png" alt-text="Screenshot of a Defender for IoT sensor sign-in page.":::
-Once registration is complete for the sensor, you will be able to download an activation file for the sensor. The sensor activation file contains instructions about the management mode of the sensor. The activation file you download, will be unique for each sensor that you deploy. The user who signs in to the sensor console for the first time, will uploads the activation file to the sensor.
+1. Enter the credentials defined during the sensor installation.
-**To download an activation file:**
+1. Select **Login/Next**. The **Sensor Network Settings** tab opens.
-1. On the **Onboard Sensor** page, select **Register**
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-wizard-activate.png" alt-text="Screenshot of the sensor network settings options when signing into the sensor.":::
-1. Select **download activation file**.
+1. In the **Sensor Network Settings** tab, you can modify the sensor network configuration defined during installation. For the sake of this tutorial, leave the default values as they are, and select **Next**.
-1. Make the file accessible to the user who's signing in to the sensor console for the first time.
+1. In the **Activation** tab, select **Upload**, and then browse to and select your activation file.
-### Sign in and activate the sensor
+1. Approve the terms and conditions and then select **Activate**.
-**To sign in and activate:**
+1. In the **SSL/TLS Certificates** tab, you can import a trusted CA certificate, which is the recommended process for production environments. However, for the sake of the tutorial, you can select **Use Locally generated self-signed certificate**, and then select **Finish**.
-1. Go to the sensor console from your browser by using the IP defined during the installation.
+Your sensor is activated and onboarded to Defender for IoT. In the **Sites and sensors** page, you can see that the **Sensor status** column shows a green check mark, and lists the status as **OK**.
- :::image type="content" source="media/tutorial-onboarding/defender-for-iot-sensor-log-in-screen.png" alt-text="Screenshot of the Microsoft Defender for IoT sensor.":::
-1. Enter the credentials defined during the sensor installation.
-1. Select **Log in** and follow the instructions described in [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md#activate-and-set-up-your-sensor).
+## Next steps
+After your OT sensor is connected, continue with any of the following to start analyzing your data:
+- [View assets from the Azure portal](how-to-manage-device-inventory-for-organizations.md)
-## Next steps
+- [Manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)
+
+- [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md)
-Learn how to set up [other appliances](how-to-install-software.md#about-defender-for-iot-appliances).
+- [Detect threats with Microsoft Sentinel](../../sentinel/iot-solution.md?toc=/azure/defender-for-iot/organizations/toc.json&bc=/azure/defender-for-iot/breadcrumb/toc.json)
+For more information, see:
-Read about the [agentless architecture](architecture.md).
+- [Defender for IoT installation](how-to-install-software.md)
+- [Microsoft Defender for IoT system architecture](architecture.md)
devtest-labs How To Move Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/how-to-move-labs.md
Title: Move a DevTest lab to another region description: Shows you how to move a lab to another region.-+ Last updated 03/03/2022
Last updated 03/03/2022
To move a lab, create a copy of an existing lab in another region. In this article, you'll learn how to:- > [!div class="checklist"]
->
+> >
> - Export an Azure Resource Manager (ARM) template of your lab. > - Modify the template by adding or updating the target region and other parameters. > - Deploy the template to create the new lab in the target region. > - Configure the new lab. > - Move data to the new lab. > - Delete the resources in the source region.- ## Prerequisites - Ensure that the services and features that your account uses are supported in the target region.
In this article, you'll learn how to:
- For preview features, ensure that your subscription is allowlisted for the target region. - DevTest Labs doesn't store them nor expose passwords from the exported ARM template. You will need to know the passwords/secrets for:
- - the VMs
- - the Stored Secrets
- - PAT tokens of the private Artifact Repos to move the private repos together with the lab.
-
-<a id="prepare"></a>
+ - the VMs
+ - the Stored Secrets
+ - PAT tokens of the private Artifact Repos to move the private repos together with the lab.
## Prepare to move
To get started, export and modify a Resource Manager template.
1. If you don't have [Resource Group](../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) under the target region, create one now.
-1. Move your current Virtual Network to the new region and resource group using the steps included in the article, "[Move an Azure virtual network to another region](../virtual-network/move-across-regions-vnet-portal.md)".
+1. Move your current Virtual Network to the new region and resource group using the steps included in the article, "[Move an Azure virtual network to another region](../virtual-network/move-across-regions-vnet-portal.md)".
+
+ Alternately, you can create a new virtual network, if you don't have to keep the original one.
- Alternately, you can create a new virtual network, if you don't have to keep the original one.
-
### Export an ARM template of your lab. Next, you'll export a JSON template contains settings that describe your lab.
To export a template by using Azure portal:
1. Locate the .zip file that you downloaded from the portal, and unzip that file to a folder of your choice.
- This zip file contains the .json files that comprise the template and scripts to deploy the template. It contains all the resources under your lab listed in ARM template format, except for the Shared Image Gallery resources.
+ This zip file contains the .json files that comprise the template and scripts to deploy the template. It contains all the resources under your lab listed in ARM template format, except for the Shared Image Gallery resources.
### Modify the template
-In order for the ARM template to deploy correctly in the new region, you must change a few parts of the template.
+In order for the ARM template to deploy correctly in the new region, you must change a few parts of the template.
To update the template by using Azure portal: 1. In the Azure portal, select **Create a resource**.
-2. In **Search the Marketplace**, type **template deployment**, and then press **ENTER**.
+1. In **Search the Marketplace**, type **template deployment**, and then press **ENTER**.
-3. Select **Template deployment**.
+1. Select **Template deployment**.
- ![Azure Resource Manager templates library](../storage/common/media/storage-account-move/azure-resource-manager-template-library.png)
+ ![Azure Resource Manager templates library](../storage/common/media/storage-account-move/azure-resource-manager-template-library.png)
-4. Select **Create**.
+1. Select **Create**.
-5. Select **Build your own template in the editor**.
+1. Select **Build your own template in the editor**.
-6. Select **Load file**, and then follow the instructions to load the **template.json** file that you downloaded in the last section.
+1. Select **Load file**, and then follow the instructions to load the **template.json** file that you downloaded in the last section.
-7. In the editor, make the following changes to the **template.json** file:
+1. In the editor, make the following changes to the **template.json** file:
1. Replace the original `location` with the new region in which you want to deploy, such as `westus2`, `southeastasia`, etc. To obtain region location codes, see [Azure Locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces, **Central US** = `centralus`.
To update the template by using Azure portal:
``` 1. If you have "All virtual machines in one resource group" set in the "Lab settings", also update the following in the ARM template:+ + Update the `apiVersion` of the `microsoft.devtestlab/labs` resource to `2018-10-15-preview`. + Add `vmCreationResourceGroupId` to the `properties` section.
To update the template by using Azure portal:
``` 1. Find the `"type": "microsoft.devtestlab/labs/users"` resource. There, remove the entire `secretStore` section, including the `keyVaultld` and the `keyVaultUri` parameters.+ ```json secretStore": { "keyVaultUri": "<vaultvalue>"
To update the template by using Azure portal:
} ```
- 1. Find the `"type": "microsoft.devtestlab/labs/virtualnetworks"` resource. If you created a new virtual network earlier in these steps, you must add the actual subnet name in `/subnets/[SUBNET_NAME]`. If you chose to move the Vnet to a new region, you should skip this step.
+ 1. Find the `"type": "microsoft.devtestlab/labs/virtualnetworks"` resource. If you created a new virtual network earlier in these steps, you must add the actual subnet name in `/subnets/[SUBNET_NAME]`. If you chose to move the Vnet to a new region, you should skip this step.
+
+ 1. Find the `"type": "microsoft.devtestlab/labs/virtualmachines"` resource.
+
+ 1. Under the "properties", add `"password": "RANDOM_PASSWORD"`
- 1. Find the `"type": "microsoft.devtestlab/labs/virtualmachines"` resource.
- 1. Under the "properties", add ` "password": "RANDOM_PASSWORD" `
- > [!Note]
- > A "password" property is required to create a new VM. We input a random password because we will later be swapping the OS disk with the original VM.
-
- 1. For Shared IP virtual machines, add this snippet under the "properties.networkInterface",
+ > [!Note]
+ > A "password" property is required to create a new VM. We input a random password because we will later be swapping the OS disk with the original VM.
- Windows VM with RDP:
- ```
+ 1. For Shared IP virtual machines, add this snippet under the "properties.networkInterface",
+
+ Windows VM with RDP:
+
+ ```
+ "networkInterface": {
+ "sharedPublicIpAddressConfiguration": {
+ "inboundNatRules": [
+ {
+ "transportProtocol": "tcp",
+ "backendPort": 3389
+ }
+ ]
+ }
+ }
+ ```
+
+ Linux VM with SSH:
+
+ ```
"networkInterface": {
- "sharedPublicIpAddressConfiguration": {
- "inboundNatRules": [
- {
- "transportProtocol": "tcp",
- "backendPort": 3389
- }
- ]
- }
- }
- ```
-
- Linux VM with SSH:
- ```
- "networkInterface": {
- "sharedPublicIpAddressConfiguration": {
- "inboundNatRules": [
- {
- "transportProtocol": "tcp",
- "backendPort": 22
- }
- ]
- }
- }
- ```
-
- 1. Under the `microsoft.devtestlab/labs/users/secrets` resources, the following parameter the "properties". Replace `YOUR_STORED_PASSWORD` with your password.
-
- > [!IMPORTANT]
- > Use secureString for password values.
-
- ```json
- "value": "YOUR_STORED_PASSWORD"
- ```
-
- 1. Under the `microsoft.devtestlab/labs/artifactsources` resources, the following parameter the "properties". Replace `YOUR_STORED_PASSWORD` with your password. Again, use secureString for password values.
-
- ```json
- "securityToken": "YOUR_PAT_TOKEN_VALUE"
- ```
-
- 1. In the editor, save the template.
----
-<a id="move"></a>
-
-## Deploy to move
+ "sharedPublicIpAddressConfiguration": {
+ "inboundNatRules": [
+ {
+ "transportProtocol": "tcp",
+ "backendPort": 22
+ }
+ ]
+ }
+ }
+ ```
+
+    1. Under the `microsoft.devtestlab/labs/users/secrets` resources, add the following parameter under the "properties" section. Replace `YOUR_STORED_PASSWORD` with your password.
+
+ > [!IMPORTANT]
+ > Use secureString for password values.
+ ```json
+ "value": "YOUR_STORED_PASSWORD"
+ ```
+
+    1. Under the `microsoft.devtestlab/labs/artifactsources` resources, add the following parameter under the "properties" section. Replace `YOUR_PAT_TOKEN_VALUE` with your PAT token. Again, use secureString for secret values.
+
+ ```json
+ "securityToken": "YOUR_PAT_TOKEN_VALUE"
+ ```
+
+ 1. In the editor, save the template.
+
+## Deploy to move
Deploy the template to create a new lab in the target region.
-1. In the **Custom deployment** page, update all the parameters with the corresponding values defined in the template.
+1. In the **Custom deployment** page, update all the parameters with the corresponding values defined in the template.
+ 1. Enter the following values:
- |Name|Value|
- |-|-|
- |**Subscription**|Select an Azure subscription.|
- |**Resource group**|Select the resource group name you created in the last section. |
- |**Location**|Select a location for the lab. For example, **Central US**. |
- |**Lab Name**|Must be a different name. |
- |**Vnet ID**|Must be the moved one, or the new one you just created. |
+ |Name|Value|
+ |-|-|
+ |**Subscription**|Select an Azure subscription.|
+ |**Resource group**|Select the resource group name you created in the last section. |
+ |**Location**|Select a location for the lab. For example, **Central US**. |
+ |**Lab Name**|Must be a different name. |
+ |**Vnet ID**|Must be the moved one, or the new one you just created. |
+
1. Select **Review + create**.+ 1. Select **Create**.+ 1. Select the bell icon (notifications) from the top of the screen to see the deployment status. You shall see **Deployment in progress**. Wait until the deployment is completed. ### Configure the new lab
-While most Lab resources have been replicated under the new region using the ARM template, a few edits still need to be moved manually.
+While most Lab resources have been replicated under the new region using the ARM template, a few edits still need to be moved manually.
1. Add the Compute Gallery back to the lab if there're any in the original one.
-2. Add the policies "Virtual machines per user", "Virtual machines per lab" and "Allowed Virtual machine sizes" back to the moved lab
+1. Add the policies "Virtual machines per user", "Virtual machines per lab" and "Allowed Virtual machine sizes" back to the moved lab
-### Swap the OS disks of the Compute VMs under the new VMs.
-
-Note the VMs under the new Lab have the same specs as the ones under the old Lab. The only difference is their OS Disks.
+### Swap the OS disks of the Compute VMs under the new VMs.
+
+Note the VMs under the new Lab have the same specs as the ones under the old Lab. The only difference is their OS Disks.
1. Create an empty disk under the new region.+ - Get the target Compute VM OS disk name under the new lab. You can find the Compute VM and its disk under the resource group on the lab's Virtual Machine page. - Use [AzCopy](../storage/common/storage-use-azcopy-v10.md) to copy the old disk content into the new, empty disk in the new region. You can run the PowerShell commands from your dev box or from the [Azure Cloud Shell](../cloud-shell/quickstart-powershell.md). AzCopy is the preferred tool to move your data over. It's optimized for performance. One way that it's faster is that data is copied directly, so AzCopy doesn't use the network bandwidth of your computer. Use AzCopy at the command line or as part of a custom script. See [Get started with AzCopy](../storage/common/storage-use-azcopy-v10.md).
- ```powershell
- # Fill in the source/target disk names and their resource group names
- $sourceDiskName = "SOURCE_DISK"
- $sourceRG = "SOURCE_RG"
- $targetDiskName = "TARGET_DISK"
- $targetRG = "TARGET_RG"
- $targetRegion = "TARGET_LOCATION"
-
- # Create an empty target disk from the source disk
- $sourceDisk = Get-AzDisk -ResourceGroupName $sourceRG -DiskName $sourceDiskName
- $targetDiskconfig = New-AzDiskConfig -SkuName $sourceDisk.Sku.Name -UploadSizeInBytes $($sourceDisk.DiskSizeBytes+512) -Location $targetRegion -OsType $sourceDisk.OsType -CreateOption 'Upload'
- $targetDisk = New-AzDisk -ResourceGroupName $targetRG -DiskName $targetDiskName -Disk $targetDiskconfig
-
- # Copy the disk content from source to target
- $sourceDiskSas = Grant-AzDiskAccess -ResourceGroupName $sourceRG -DiskName $sourceDiskName -DurationInSecond 1800 -Access 'Read'
- $targetDiskSas = Grant-AzDiskAccess -ResourceGroupName $targetRG -DiskName $targetDiskName -DurationInSecond 1800 -Access 'Write'
- azcopy copy $sourceDiskSas.AccessSAS $targetDiskSas.AccessSAS --blob-type PageBlob
- Revoke-AzDiskAccess -ResourceGroupName $sourceRG -DiskName $sourceDiskName
- Revoke-AzDiskAccess -ResourceGroupName $targetRG -DiskName $targetDiskName
- ```
-
- After that, you'll have a new disk under the new region.
-
- 1. Swap the OS disk of the Compute VM under the new lab with the new disk. To learn how, see the article, "[Change the OS disk used by an Azure VM using PowerShell](../virtual-machines/windows/os-disk-swap.md)".
-
+ ```powershell
+ # Fill in the source/target disk names and their resource group names
+ $sourceDiskName = "SOURCE_DISK"
+ $sourceRG = "SOURCE_RG"
+ $targetDiskName = "TARGET_DISK"
+ $targetRG = "TARGET_RG"
+ $targetRegion = "TARGET_LOCATION"
+
+ # Create an empty target disk from the source disk
+ $sourceDisk = Get-AzDisk -ResourceGroupName $sourceRG -DiskName $sourceDiskName
+ $targetDiskconfig = New-AzDiskConfig -SkuName $sourceDisk.Sku.Name -UploadSizeInBytes $($sourceDisk.DiskSizeBytes+512) -Location $targetRegion -OsType $sourceDisk.OsType -CreateOption 'Upload'
+ $targetDisk = New-AzDisk -ResourceGroupName $targetRG -DiskName $targetDiskName -Disk $targetDiskconfig
+
+ # Copy the disk content from source to target
+ $sourceDiskSas = Grant-AzDiskAccess -ResourceGroupName $sourceRG -DiskName $sourceDiskName -DurationInSecond 1800 -Access 'Read'
+ $targetDiskSas = Grant-AzDiskAccess -ResourceGroupName $targetRG -DiskName $targetDiskName -DurationInSecond 1800 -Access 'Write'
+ azcopy copy $sourceDiskSas.AccessSAS $targetDiskSas.AccessSAS --blob-type PageBlob
+ Revoke-AzDiskAccess -ResourceGroupName $sourceRG -DiskName $sourceDiskName
+ Revoke-AzDiskAccess -ResourceGroupName $targetRG -DiskName $targetDiskName
+ ```
+
+ After that, you'll have a new disk under the new region.
+
+ 1. Swap the OS disk of the Compute VM under the new lab with the new disk. To learn how, see the article, "[Change the OS disk used by an Azure VM using PowerShell](../virtual-machines/windows/os-disk-swap.md)".
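The article above shows the PowerShell approach. If you prefer the Azure CLI, the following sketch performs the same swap; the resource group, VM, and disk names are placeholders, and the VM must be deallocated before its OS disk can be changed.

```bash
# Deallocate the target VM, swap in the copied OS disk, and start the VM again (placeholder names).
az vm deallocate --resource-group TARGET_RG --name TARGET_VM
diskId=$(az disk show --resource-group TARGET_RG --name TARGET_DISK --query id --output tsv)
az vm update --resource-group TARGET_RG --name TARGET_VM --os-disk "$diskId"
az vm start --resource-group TARGET_RG --name TARGET_VM
```
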
## Discard or clean up
-After the deployment, if you want to start over, you can delete the target lab, and repeat the steps described in the [Prepare](#prepare) and [Move](#move) sections of this article.
+After the deployment, if you want to start over, you can delete the target lab, and repeat the steps described in the [Prepare](#prepare-to-move) and [Move](#deploy-to-move) sections of this article.
To commit the changes and complete the move, you must delete the original lab.
To remove a lab by using the Azure portal:
1. In the Azure portal, expand the menu on the left side to open the menu of services, and choose **DevTest Labs** to display the list of labs.
-2. Locate the target lab to delete, and right-click the **More** button (**...**) on the right side of the listing.
+1. Locate the target lab to delete, and right-click the **More** button (**...**) on the right side of the listing.
-3. Select **Delete**, and confirm.
+1. Select **Delete**, and confirm.
## Next steps
dms Known Issues Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-postgresql-online.md
Last updated 02/20/2020
-# Known issues/migration limitations with online migrations from PostgreSQL to Azure DB for PostgreSQL
+# Known issues/limitations with online migrations from PostgreSQL to Azure Database for PostgreSQL
Known issues and limitations associated with online migrations from PostgreSQL to Azure Database for PostgreSQL are described in the following sections. ## Online migration configuration -- The source PostgreSQL server must be running version 9.4, 9.5, 9.6, 10, or 11. For more information, see the article [Supported PostgreSQL Database Versions](../postgresql/concepts-supported-versions.md).-- Only migrations to the same or a higher version are supported. For example, migrating PostgreSQL 9.5 to Azure Database for PostgreSQL 9.6 or 10 is supported, but migrating from PostgreSQL 11 to PostgreSQL 9.6 isn't supported.-- To enable logical replication in the **source PostgreSQL postgresql.conf** file, set the following parameters:
- - **wal_level** = logical
- - **max_replication_slots** = [at least max number of databases for migration]; if you want to migrate four databases, set the value to at least 4.
- - **max_wal_senders** = [number of databases running concurrently]; the recommended value is 10
-- Add DMS agent IP to the source PostgreSQL pg_hba.conf
+- The source PostgreSQL server must be running version 9.4, 9.5, 9.6, 10, or 11. For more information, see [Supported PostgreSQL database versions](../postgresql/concepts-supported-versions.md).
+- Only migrations to the same or a higher version are supported. For example, migrating PostgreSQL 9.5 to Azure Database for PostgreSQL 9.6 or 10 is supported. Migrating from PostgreSQL 11 to PostgreSQL 9.6 isn't supported.
+- To enable logical replication in the *source PostgreSQL postgresql.conf* file, set the following parameters:
+
+    - **wal_level**: Set to `logical`.
+    - **max_replication_slots**: Set to at least the number of databases you want to migrate. For example, to migrate four databases, set the value to at least 4.
+    - **max_wal_senders**: Set to the number of databases running concurrently. The recommended value is 10.
+- Add DMS agent IP to the source PostgreSQL *pg_hba.conf*.
1. Make a note of the DMS IP address after you finish provisioning an instance of Azure Database Migration Service.
- 2. Add the IP address to the pg_hba.conf file as shown:
+ 1. Add the IP address to the *pg_hba.conf* file:
``` host all 172.16.136.18/10 md5
Known issues and limitations associated with online migrations from PostgreSQL t
## Size limitations -- You can migrate up to 1 TB of data from PostgreSQL to Azure DB for PostgreSQL using a single DMS service.-- The number of tables you can migrate in one DMS activity is limited based on the number of characters in your table names. An upper limit of 7,500 characters applies to the combined length of the schema_name.table_name. If the combined length of the schema_name.table_name exceeds this limit, you likely will see the error *(400) Bad Request.Entity too large*. To avoid this error, try to migrate your tables by using multiple DMS activities, with each activity adhering to the 7,500-character limit.
+- You can migrate up to 1 TB of data from PostgreSQL to Azure Database for PostgreSQL, using a single DMS service.
+- The number of tables you can migrate in one DMS activity is limited based on the number of characters in your table names. An upper limit of 7,500 characters applies to the combined length of schema_name.table_name. If the combined length of schema_name.table_name exceeds this limit, you'll see the error "(400) Bad Request. Entity too large." To avoid this error, try to migrate your tables by using multiple DMS activities. Each activity must adhere to the 7,500-character limit.
## Datatype limitations
- **Limitation**: If there's no primary key on tables, changes may not be synced to the target database.
+ **Limitation**: If there's no primary key on tables, changes might not be synced to the target database.
+
+ **Workaround**: Temporarily set a primary key for the table for migration to continue. Remove the primary key after data migration is finished.
- **Workaround**: Temporarily set a primary key for the table for migration to continue. You can remove the primary key after data migration is complete.
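As a sketch of that workaround, you could add a temporary primary key with psql before starting migration and drop it once migration finishes. The server, database, table, and column names below are placeholders, not values from this article, and the column used must already be unique and non-null.

```
# Add a temporary primary key before migration (placeholder names).
psql "host=<source-server> dbname=<database> user=<user>" \
     -c "ALTER TABLE public.orders ADD CONSTRAINT orders_tmp_pk PRIMARY KEY (order_id);"

# Remove it after the data migration is finished.
psql "host=<source-server> dbname=<database> user=<user>" \
     -c "ALTER TABLE public.orders DROP CONSTRAINT orders_tmp_pk;"
```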
+## Limitations with online migration from AWS RDS PostgreSQL
-## Limitations when migrating online from AWS RDS PostgreSQL
+When you try to perform an online migration from Amazon Web Service (AWS) Relational Database (RDS) PostgreSQL to Azure Database for PostgreSQL, you might encounter the following errors:
-When you try to perform an online migration from AWS RDS PostgreSQL to Azure Database for PostgreSQL, you may encounter the following errors.
+- **Error**: The default value of column '{column}' in table '{table}' in database '{database}' is different on source and target servers. It's '{value on source}' on source and '{value on target}' on target.
-- **Error**: The Default value of column '{column}' in table '{table}' in database '{database}' is different on source and target servers. It's '{value on source}' on source and '{value on target}' on target.
+ **Limitation**: This error occurs when the default value on a column schema differs between the source and target databases.
- **Limitation**: This error occurs when the default value on a column schema is different between the source and target databases.
- **Workaround**: Ensure that the schema on the target matches schema on the source. For detail on migrating schema, refer to the [Azure PostgreSQL online migration documentation](./tutorial-postgresql-azure-postgresql-online.md#migrate-the-sample-schema).
+ **Workaround**: Ensure that the schema on the target matches the schema on the source. For more information on migrating the schema, see the [Azure Database for PostgreSQL online migration documentation](./tutorial-postgresql-azure-postgresql-online.md#migrate-the-sample-schema).
-- **Error**: Target database '{database}' has '{number of tables}' tables where as source database '{database}' has '{number of tables}' tables. The number of tables on source and target databases should match.
+- **Error**: Target database '{database}' has '{number of tables}' tables whereas source database '{database}' has '{number of tables}' tables. The number of tables on source and target databases should match.
- **Limitation**: This error occurs when the number of tables is different between the source and target databases.
+ **Limitation**: This error occurs when the number of tables differs between the source and target databases.
- **Workaround**: Ensure that the schema on the target matches schema on the source. For detail on migrating schema, refer to the [Azure PostgreSQL online migration documentation](./tutorial-postgresql-azure-postgresql-online.md#migrate-the-sample-schema).
+ **Workaround**: Ensure that the schema on the target matches the schema on the source. For more information on migrating the schema, see the [Azure Database for PostgreSQL online migration documentation](./tutorial-postgresql-azure-postgresql-online.md#migrate-the-sample-schema).
- **Error:** The source database {database} is empty.
- **Limitation**: This error occurs when the source database is empty. It is most likely because you have selected the wrong database as source.
+ **Limitation**: This error occurs when the source database is empty. You probably selected the wrong database as the source.
  **Workaround**: Double-check the source database you selected for migration, and then try again.

-- **Error:** The target database {database} is empty. Please migrate the schema.
+- **Error:** The target database {database} is empty. Migrate the schema.
+
+ **Limitation**: This error occurs when there's no schema on the target database. Make sure the schema on the target matches the schema on the source.
- **Limitation**: This error occurs when there's no schema on the target database. Make sure schema on the target matches schema on the source.
- **Workaround**: Ensure that the schema on the target matches schema on the source. For detail on migrating schema, refer to the [Azure PostgreSQL online migration documentation](./tutorial-postgresql-azure-postgresql-online.md#migrate-the-sample-schema).
+ **Workaround**: Ensure that the schema on the target matches the schema on the source. For more information on migrating the schema, see the [Azure Database for PostgreSQL online migration documentation](./tutorial-postgresql-azure-postgresql-online.md#migrate-the-sample-schema).
## Other limitations

-- The database name can't include a semi-colon (;).
-- A captured table must have a Primary Key. If a table doesn't have a primary key, the result of DELETE and UPDATE record operations will be unpredictable.
-- Updating a Primary Key segment is ignored. In such cases, applying such an update will be identified by the target as an update that didn't update any rows and will result in a record written to the exceptions table.
-- Migration of multiple tables with the same name but a different case (e.g. table1, TABLE1, and Table1) may cause unpredictable behavior and is therefore not supported.
+- The database name can't include a semicolon (;).
+- A captured table must have a primary key. If a table doesn't have a primary key, the result of DELETE and UPDATE record operations will be unpredictable.
+- Updating a primary key segment is ignored. Applying such an update will be identified by the target as an update that didn't update any rows. The result is a record written to the exceptions table.
+- Migration of multiple tables with the same name but a different case might cause unpredictable behavior and isn't supported. An example is the use of table1, TABLE1, and Table1.
- Change processing of [CREATE | ALTER | DROP | TRUNCATE] table DDLs isn't supported.
-- In Azure Database Migration Service, a single migration activity can only accommodate up to four databases.
-- Migration of the pg_largeobject table is not supported.
+- In Database Migration Service, a single migration activity can only accommodate up to four databases.
+- Migration of the pg_largeobject table isn't supported.
frontdoor Front Door Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-caching.md
These formats are supported in the lists of paths to purge:
- **Root domain purge**: Purge the root of the endpoint with "/" in the path.

> [!NOTE]
-> **Purging wildcard domains**: Specifying cached paths for purging as discussed in this section doesn't apply to any wildcard domains that are associated with the Front Door. Currently, we don't support directly purging wildcard domains. You can purge paths from specific subdomains by specifying that specfic subdomain and the purge path. For example, if my Front Door has `*.contoso.com`, I can purge assets of my subdomain `foo.contoso.com` by typing `foo.contoso.com/path/*`. Currently, specifying host names in the purge content path is imited to subdomains of wildcard domains, if applicable.
+> **Purging wildcard domains**: Specifying cached paths for purging as discussed in this section doesn't apply to any wildcard domains that are associated with the Front Door. Currently, we don't support directly purging wildcard domains. You can purge paths from specific subdomains by specifying that specific subdomain and the purge path. For example, if your Front Door has `*.contoso.com`, you can purge assets of your subdomain `foo.contoso.com` by typing `foo.contoso.com/path/*`. Currently, specifying host names in the purge content path is limited to subdomains of wildcard domains, if applicable.
> Cache purges on the Front Door are case-insensitive. Additionally, they're query string agnostic, meaning purging a URL will purge all query-string variations of it.
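If you script purges, the Azure CLI `front-door` extension exposes a purge command. The following is only a sketch with placeholder resource names; confirm the command and its parameters against your installed extension version before relying on it.

```
# Purge cached assets under a specific subdomain of a wildcard domain (placeholder names).
az network front-door purge-endpoint \
    --resource-group myResourceGroup \
    --name myFrontDoor \
    --content-paths 'foo.contoso.com/path/*'
```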
hdinsight Apache Ambari Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/apache-ambari-usage.md
Title: Apache Ambari usage in Azure HDInsight
description: Discussion of how Apache Ambari is used in Azure HDInsight. Previously updated : 01/12/2021 Last updated : 04/07/2022 # Apache Ambari usage in Azure HDInsight
hdinsight Apache Domain Joined Run Hbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-run-hbase.md
Title: Apache HBase & Enterprise Security Package - Azure HDInsight
description: Tutorial - Learn how to configure Apache Ranger policies for HBase in Azure HDInsight with Enterprise Security Package. Previously updated : 09/04/2019 Last updated : 04/07/2022 # Tutorial: Configure Apache HBase policies in HDInsight with Enterprise Security Package
If you're not going to continue to use this application, delete the HBase cluste
## Next steps > [!div class="nextstepaction"]
-> [Get started with an Apache HBase](../hbase/apache-hbase-tutorial-get-started-linux.md)
+> [Get started with an Apache HBase](../hbase/apache-hbase-tutorial-get-started-linux.md)
hdinsight Apache Hadoop Use Hive Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-visual-studio.md
description: Learn how to use the Data Lake tools for Visual Studio to run Apach
Previously updated : 11/27/2019 Last updated : 04/07/2022 # Run Apache Hive queries using the Data Lake tools for Visual Studio
hdinsight Apache Hadoop Visual Studio Tools Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-visual-studio-tools-get-started.md
keywords: hadoop tools,hive query,visual studio,visual studio hadoop
Previously updated : 04/14/2020 Last updated : 04/07/2022 # Use Data Lake Tools for Visual Studio to connect to Azure HDInsight and run Apache Hive queries
In this article, you learned how to use the Data Lake Tools for Visual Studio pa
* [What is Apache Hive and HiveQL on Azure HDInsight?](hdinsight-use-hive.md) * [Create Apache Hadoop cluster - Template](apache-hadoop-linux-tutorial-get-started.md) * [Submit Apache Hadoop jobs in HDInsight](submit-apache-hadoop-jobs-programmatically.md)
-* [Analyze Twitter data using Apache Hive and Apache Hadoop on HDInsight](../hdinsight-analyze-twitter-data-linux.md)
+* [Analyze Twitter data using Apache Hive and Apache Hadoop on HDInsight](../hdinsight-analyze-twitter-data-linux.md)
hdinsight Hbase Troubleshoot Rest Not Spending https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-rest-not-spending.md
Title: Apache HBase REST not responding to requests in Azure HDInsight
description: Resolve issue with Apache HBase REST not responding to requests in Azure HDInsight. Previously updated : 08/01/2019 Last updated : 04/07/2022 # Scenario: Apache HBase REST not responding to requests in Azure HDInsight
fi
## Next steps
hdinsight Hdinsight Hadoop Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-architecture.md
description: Describes Apache Hadoop storage and processing on Azure HDInsight c
Previously updated : 02/07/2020 Last updated : 04/07/2022 # Apache Hadoop architecture in HDInsight
The `fs.trash.interval` property from **HDFS** > **Advanced core-site** should r
## Next steps * [Use MapReduce in Apache Hadoop on HDInsight](hadoop/hdinsight-use-mapreduce.md)
-* [Introduction to Azure HDInsight](hadoop/apache-hadoop-introduction.md)
+* [Introduction to Azure HDInsight](hadoop/apache-hadoop-introduction.md)
hdinsight Interactive Query Troubleshoot Inaccessible Hive View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-inaccessible-hive-view.md
Title: Apache Hive connections to Apache Zookeeper - Azure HDInsight
description: Apache Hive View inaccessible due to Apache Zookeeper issues in Azure HDInsight Previously updated : 07/30/2019 Last updated : 04/07/2022 # Scenario: Apache Hive fails to establish a connection to Apache Zookeeper in Azure HDInsight
It is possible that Hive may fail to establish a connection to Zookeeper, which
## Next steps
healthcare-apis Deploy Iot Connector In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-iot-connector-in-azure.md
Previously updated : 03/21/2022 Last updated : 04/07/2022 # Deploy MedTech service in the Azure portal
-In this quickstart, you'll learn how to deploy MedTech service in the Azure portal. Configuring the MedTech service will enable you to ingest data from Internet of Things (IoT) into your Fast Healthcare Interoperability Resources (FHIR&#174;) service using an Azure Event Hub for device messages.
+In this quickstart, you'll learn how to deploy MedTech service in the Azure portal. The MedTech service will enable you to ingest data from Internet of Things (IoT) into your Fast Healthcare Interoperability Resources (FHIR&#174;) service.
## Prerequisites
-It's important that you have the following prerequisites completed before you begin the steps of creating an MedTech service instance in Azure Health Data Services.
+It's important that you have the following prerequisites completed before you begin the steps of creating a MedTech service instance in Azure Health Data Services.
* [Azure account](https://azure.microsoft.com/free/search/?OCID=AID2100131_SEM_c4b0772dc7df1f075552174a854fd4bc:G:s&ef_id=c4b0772dc7df1f075552174a854fd4bc:G:s&msclkid=c4b0772dc7df1f075552174a854fd4bc) * [Resource group deployed in the Azure portal](../../azure-resource-manager/management/manage-resource-groups-portal.md)
-* [Event Hubs namespace and Event Hub deployed in the Azure portal](../../event-hubs/event-hubs-create.md)
+* [Event Hubs namespace and event hub deployed in the Azure portal](../../event-hubs/event-hubs-create.md)
* [Workspace deployed in Azure Health Data Services](../healthcare-apis-quickstart.md)
-* [FHIR service deployed in Azure Health Data Services](../fhir/fhir-portal-quickstart.md)
+* [FHIR service deployed in Azure Health Data Services](../fhir/fhir-portal-quickstart.md)
+
+> [!IMPORTANT]
+> If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group.
+>
+> Consumer groups enable multiple consuming applications to each have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups).
+>
+> Examples:
+>* Two MedTech services accessing the same device message event hub.
+>* A MedTech service and a storage writer application accessing the same device message event hub.
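For example, the following Azure CLI sketch creates a dedicated consumer group for the MedTech service on the device message event hub. The resource names are placeholders.

```
# Create a consumer group reserved for the MedTech service (placeholder names).
az eventhubs eventhub consumer-group create \
    --resource-group myResourceGroup \
    --namespace-name myEventHubsNamespace \
    --eventhub-name devicedata \
    --name medtech-service
```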
## Deploy MedTech service
Under the **Basics** tab, complete the required fields under **Instance details*
2. Enter the **Event Hub name**.
- The Event Hub name is the name of the **Event Hubs Instance** that you've deployed.
+ The event hub name is the name of the **Event Hubs Instance** that you've deployed.
- For information about Azure Event Hubs, see [Quickstart: Create an Event Hub using Azure portal](../../event-hubs/event-hubs-create.md#create-an-event-hubs-namespace).
+ For information about Azure Event Hubs, see [Quickstart: Create an event hub using Azure portal](../../event-hubs/event-hubs-create.md#create-an-event-hubs-namespace).
3. Enter the **Consumer Group**.
Under the **Destination** tab, enter the destination properties associated with
**Create**
- The MedTech service destination attempts to retrieve a device resource from the FHIR Server using the device identifier included in the Event Hub message. It also attempts to retrieve a patient resource from the FHIR Server using the patient identifier included in the Event Hub message. If either resource isn't found, new resources will be created (device, patient, or both) containing just the identifier contained in the Event Hub message. When you use the **Create** option, both a device identifier and a patient identifier can be configured in the device mapping. In other words, when the IoT Connector destination is in **Create** mode, it can function normally **without** adding device and patient resources to the FHIR Server.
+ The MedTech service destination attempts to retrieve a device resource from the FHIR Server using the device identifier included in the event hub message. It also attempts to retrieve a patient resource from the FHIR Server using the patient identifier included in the event hub message. If either resource isn't found, new resources will be created (device, patient, or both) containing just the identifier contained in the event hub message. When you use the **Create** option, both a device identifier and a patient identifier can be configured in the device mapping. In other words, when the IoT Connector destination is in **Create** mode, it can function normally **without** adding device and patient resources to the FHIR Server.
**Lookup**
- The MedTech service destination attempts to retrieve a device resource from the FHIR Server using the device identifier included in the event hub message. If the device resource isn't found, this will cause an error, and the data won't be processed. For **Lookup** to function properly, a device resource with an identifier matching the device identifier included in the event hub message **must** exist and the device resource **must** have a reference to a patient resource that also exists. In other words, when the MedTech service destination is in the Lookup mode, device and patient resources **must** be added to the FHIR Server before data can be processed.
+ The MedTech service destination attempts to retrieve a device resource from the FHIR service using the device identifier included in the event hub message. If the device resource isn't found, an error will occur, and the data won't be processed. For **Lookup** to function properly, a device resource with an identifier matching the device identifier included in the event hub message **must** exist and the device resource **must** have a reference to a patient resource that also exists. In other words, when the MedTech service destination is in the Lookup mode, device and patient resources **must** be added to the FHIR Server before data can be processed.
For more information, see the open source documentation [FHIR destination mapping](https://github.com/microsoft/iomt-fhir/blob/master/docs/Configuration.md#fhir-mapping).
Under the **Tags** tab, enter the tag properties associated with the MedTech ser
![Screenshot of Deployed MedTech service listed in the Azure Recent resources list.](media/azure-resources-iot-connector-deployed.png#lightbox)
- Now that your MedTech service has been deployed, we're going to walk through the steps of assigning permissions to access the Event Hub and FHIR service.
+ Now that your MedTech service has been deployed, we're going to walk through the steps of assigning permissions to access the event hub and FHIR service.
## Granting MedTech service access
-To ensure that your MedTech service works properly, it must have granted access permissions to the Event Hub and FHIR service.
+To ensure that your MedTech service works properly, it must have granted access permissions to the event hub and FHIR service.
-### Accessing the MedTech service from the Event Hub
+### Accessing the MedTech service from the event hub
1. In the **Azure Resource group** list, select the name of your **Event Hubs Namespace**.
To ensure that your MedTech service works properly, it must have granted access
![Screenshot of add role assignment required fields.](media/event-hub-add-role-assignment-fields.png#lightbox)
- The Azure Event Hubs Data Receiver role allows the MedTech service that's being assigned this role to receive data from this Event Hub.
+ The Azure Event Hubs Data Receiver role allows the MedTech service that's being assigned this role to receive data from this event hub.
For more information about application roles, see [Authentication & Authorization for Azure Health Data Services](.././authentication-authorization.md).
To ensure that your MedTech service works properly, it must have granted access
`<your workspace name>/iotconnectors/<your MedTech service name>`
- When you deploy an MedTech service, it creates a managed identity. The managed identify name is a concatenation of the workspace name, resource type (that's the MedTech service), and the name of the MedTech service.
+ When you deploy a MedTech service, it creates a managed identity. The managed identity name is a concatenation of the workspace name, resource type (that's the MedTech service), and the name of the MedTech service.
7. Select **Save**.
- After the role assignment has been successfully added to the Event Hub, a notification will display a green check mark with the text "Add Role assignment." This message indicates that the MedTech service can now read from the Event Hub.
+ After the role assignment has been successfully added to the event hub, a notification will display a green check mark with the text "Add Role assignment." This message indicates that the MedTech service can now read from the event hub.
![Screenshot of added role assignment message.](media/event-hub-added-role-assignment.png#lightbox)
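The same role assignment can also be scripted. The sketch below uses the Azure CLI with placeholder values; the object ID is the system-assigned managed identity of the MedTech service.

```
# Assign the Azure Event Hubs Data Receiver role to the MedTech service identity (placeholder values).
az role assignment create \
    --role "Azure Event Hubs Data Receiver" \
    --assignee-object-id <medtech-managed-identity-object-id> \
    --assignee-principal-type ServicePrincipal \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventHub/namespaces/<namespace>/eventhubs/<event-hub>"
```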
For more information about authoring access to Event Hubs resources, see [Author
## Next steps
-In this article, you've learned how to deploy an MedTech service in the Azure portal. For an overview of MedTech service, see
+In this article, you've learned how to deploy a MedTech service in the Azure portal. For an overview of MedTech service, see
>[!div class="nextstepaction"] >[MedTech service overview](iot-connector-overview.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
load-testing Overview What Is Azure Load Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/overview-what-is-azure-load-testing.md
You can create a load test by using existing test scripts based on Apache JMeter
Azure Load Testing test engines abstract the required infrastructure for running a high-scale load test. The test engines run the Apache JMeter script to simulate a large number of virtual users simultaneously accessing your application endpoints. To scale out the load test, you can configure the number of test engines.
-Azure Load Testing uses Apache JMeter version 5.4.1 for running load tests. You can use Apache JMeter plugins that are available on https://jmeter-plugins.org in your test script.
+Azure Load Testing uses Apache JMeter version 5.4.3 for running load tests. You can use Apache JMeter plugins that are available on https://jmeter-plugins.org in your test script.
The application can be hosted anywhere: in Azure, on-premises, or in other clouds. During the load test, the service collects the following resource metrics and displays them in a dashboard:
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
In the body, include the `KeyType` property as either `Primary` or `Secondary`.

### Enable Azure Active Directory Open Authentication (Azure AD OAuth)
### Enable Azure Active Directory Open Authentication (Azure AD OAuth)
-For inbound calls to an endpoint that's created by a request-based trigger, you can enable [Azure AD OAuth](../active-directory/develop/index.yml) by defining or adding an authorization policy for your logic app. This way, inbound calls use OAuth [access tokens](../active-directory/develop/access-tokens.md) for authorization.
+In a Consumption logic app workflow that starts with a request-based trigger, you can authenticate inbound calls sent to the endpoint created by that trigger by enabling [Azure AD OAuth](../active-directory/develop/index.yml). To set up this authentication, [define or add an authorization policy at the logic app level](#enable-azure-ad-inbound). This way, inbound calls use [OAuth access tokens](../active-directory/develop/access-tokens.md) for authorization.
When your logic app receives an inbound request that includes an OAuth access token, Azure Logic Apps compares the token's claims against the claims specified by each authorization policy. If a match exists between the token's claims and all the claims in at least one policy, authorization succeeds for the inbound request. The token can have more claims than the number specified by the authorization policy.
-> [!NOTE]
-> For the **Logic App (Standard)** resource type in single-tenant Azure Logic Apps, Azure AD OAuth is currently
-> unavailable for inbound calls to request-based triggers, such as the Request trigger and HTTP Webhook trigger.
+In a Standard logic app workflow that starts with the Request trigger (but not a webhook trigger), you can use the Azure Functions provision for authenticating inbound calls sent to the endpoint created by that trigger by using a managed identity. This provision is also known as "**Easy Auth**". For more information, review [Trigger workflows in Standard logic apps with Easy Auth](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/trigger-workflows-in-standard-logic-apps-with-easy-auth/ba-p/3207378).
#### Considerations before you enable Azure AD OAuth
When your logic app receives an inbound request that includes an OAuth access to
} ```
+<a name="enable-azure-ad-inbound"></a>
+
#### Enable Azure AD OAuth for your logic app

Follow these steps for either the Azure portal or your Azure Resource Manager template:
mariadb Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-backup.md
Azure Database for MariaDB automatically creates server backups and stores them
Azure Database for MariaDB takes backups of the data files and the transaction log. These backups allow you to restore a server to any point-in-time within your configured backup retention period. The default backup retention period is seven days. You can [optionally configure it](howto-restore-server-portal.md#set-backup-configuration) up to 35 days. All backups are encrypted using AES 256-bit encryption.
-These backup files are not user-exposed and cannot be exported. These backups can only be used for restore operations in Azure Database for MySQL. You can use [mysqldump](howto-migrate-dump-restore.md) to copy a database.
+These backup files are not user-exposed and cannot be exported. These backups can only be used for restore operations in Azure Database for MariaDB. You can use [mysqldump](howto-migrate-dump-restore.md) to copy a database.
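As a sketch of copying a database with mysqldump, the commands below dump a database from one server and restore it to another. Server, user, and database names are placeholders; the `user@servername` login format applies to single-server deployments.

```
# Dump the source database (placeholder names).
mysqldump -h mydemoserver.mariadb.database.azure.com -u myadmin@mydemoserver -p \
          --databases mydb --single-transaction > mydb_copy.sql

# Restore it to the target server.
mysql -h mytargetserver.mariadb.database.azure.com -u myadmin@mytargetserver -p < mydb_copy.sql
```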
The backup type and frequency depend on the backend storage for the servers.
mariadb Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-business-continuity.md
Last updated 7/7/2020
# Overview of business continuity with Azure Database for MariaDB
-This article describes the capabilities that Azure Database for MySQL provides for business continuity and disaster recovery. Learn about options for recovering from disruptive events that could cause data loss or cause your database and application to become unavailable. Learn what to do when a user or application error affects data integrity, an Azure region has an outage, or your application requires maintenance.
+This article describes the capabilities that Azure Database for MariaDB provides for business continuity and disaster recovery. Learn about options for recovering from disruptive events that could cause data loss or cause your database and application to become unavailable. Learn what to do when a user or application error affects data integrity, an Azure region has an outage, or your application requires maintenance.
## Features that you can use to provide business continuity
You can use cross region read replicas to enhance your business continuity and d
## FAQ
-### Where does Azure Database for MySQL store customer data?
-By default, Azure Database for MySQL doesn't move or store customer data out of the region it is deployed in. However, customers can optionally chose to enable [geo-redundant backups](concepts-backup.md#backup-redundancy-options) or create [cross-region read replica](concepts-read-replicas.md#cross-region-replication) for storing data in another region.
+### Where does Azure Database for MariaDB store customer data?
+By default, Azure Database for MariaDB doesn't move or store customer data out of the region it is deployed in. However, customers can optionally choose to enable [geo-redundant backups](concepts-backup.md#backup-redundancy-options) or create a [cross-region read replica](concepts-read-replicas.md#cross-region-replication) for storing data in another region.
## Next steps
migrate Tutorial Discover Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-physical.md
Now, connect from the appliance to the physical servers to be discovered, and st
  - Azure Migrate supports the SSH private key generated by ssh-keygen command using RSA, DSA, ECDSA, and ed25519 algorithms.
  - Currently Azure Migrate does not support passphrase-based SSH key. Use an SSH key without a passphrase.
  - Currently Azure Migrate does not support SSH private key file generated by PuTTY.
+  - The SSH key file supports CRLF to mark a line break in the text file that you upload. SSH keys created on Linux systems most commonly have LF as their newline character, so you can convert them to CRLF by opening the file in vim, typing `:set textmode`, and saving the file. (A command-line alternative appears after this list.)
- Azure Migrate supports OpenSSH format of the SSH private key file as shown below: ![Screenshot of SSH private key supported format.](./media/tutorial-discover-physical/key-format.png)
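If you prefer the command line over vim, the conversion can also be done in the shell. These commands aren't from the article; the `sed` form assumes GNU sed, and `unix2dos` must be installed separately.

```
# Convert an LF-only private key to CRLF line endings (GNU sed).
sed -i 's/$/\r/' id_rsa_migration
# Alternatively: unix2dos id_rsa_migration
```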
openshift Support Policies V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-policies-v4.md
Certain configurations for Azure Red Hat OpenShift 4 clusters can affect your cl
## Cluster configuration requirements

* All OpenShift Cluster operators must remain in a managed state. The list of cluster operators can be returned by running `oc get clusteroperators`.
-* The cluster must have a minimum of three worker nodes and three manager nodes. Don't have taints that prevent OpenShift components to be scheduled. Don't scale the cluster workers to zero, or attempt a graceful cluster shutdown.
+* The cluster must have a minimum of three worker nodes and three control plane nodes. Don't have taints that prevent OpenShift components from being scheduled. Don't scale the cluster workers to zero, or attempt a graceful cluster shutdown.
* Don't remove or modify the cluster Prometheus and Alertmanager services. * Don't remove Service Alertmanager rules. * Security groups can't be modified. Any attempt to modify security groups will be reverted.
Certain configurations for Azure Red Hat OpenShift 4 clusters can affect your cl
* Don't set any unsupportedConfigOverrides options. Setting these options prevents minor version upgrades.
* The Azure Red Hat OpenShift service accesses your cluster via Private Link Service. Don't remove or modify service access.
* Non-RHCOS compute nodes aren't supported. For example, you can't use a RHEL compute node.
-* Don't place policies within your subscription or management group that prevent SREs from performing normal maintenance against the Azure Red Hat OpenShift cluster. For example, don''t require tags on the Azure Red Hat OpenShift RP-managed cluster resource group.
+* Don't place policies within your subscription or management group that prevent SREs from performing normal maintenance against the Azure Red Hat OpenShift cluster. For example, don't require tags on the Azure Red Hat OpenShift RP-managed cluster resource group.
+* Do not run extra workloads on the control plane nodes. While extra workloads can be scheduled on the control plane nodes, doing so causes extra resource usage and stability issues that can affect the entire cluster.
## Supported virtual machine sizes
-Azure Red Hat OpenShift 4 supports worker node instances on the following virtual machine sizes:
+Azure Red Hat OpenShift 4 supports node instances on the following virtual machine sizes:
### Control plane nodes
purview Asset Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/asset-insights.md
The second graph in file-based source types is ***Files not associated with a re
## Next steps

Learn more about Azure Purview insight reports with
-[Scan Insights](./scan-insights.md)
+
+- [Classification insights](./classification-insights.md)
+- [Glossary insights](glossary-insights.md)
purview Catalog Lineage User Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-lineage-user-guide.md
One of the platform features of Azure Purview is the ability to show the lineage
Each system supports a different level of lineage scope. Check the sections below, or your system's individual lineage article, to confirm the scope of lineage currently available. ### Data processing systems
-Data integration and ETL tools can push lineage in to Azure Purview at execution time. Tools such as Data Factory, Data Share, Synapse, Azure Databricks, and so on, belong to this category of data systems. The data processing systems reference datasets as source from different databases and storage solutions to create target datasets. The list of data processing systems currently integrated with Azure Purview for lineage are listed in below table.
+Data integration and ETL tools can push lineage into Azure Purview at execution time. Tools such as Data Factory, Data Share, Synapse, Azure Databricks, and so on, belong to this category of data processing systems. The data processing systems reference datasets as source from different databases and storage solutions to create target datasets. The list of data processing systems currently integrated with Azure Purview for lineage are listed in below table.
| Data processing system | Supported scope |
| - | - |
Databases & storage solutions such as Oracle, Teradata, and SAP have query engin
|| [SAP S/4HANA](register-scan-saps4hana-source.md) | ### Data analytics and reporting systems
-Data systems like Azure ML and Power BI report lineage into Azure Purview. These systems will use the datasets from storage systems and process through their meta model to create BI Dashboard, ML experiments and so on.
+Data analytics and reporting systems like Azure ML and Power BI report lineage into Azure Purview. These systems will use the datasets from storage systems and process through their meta model to create BI Dashboards, ML experiments and so on.
| Data analytics & reporting system | Supported scope |
| - | - |
purview Classification Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/classification-insights.md
Learn more about Azure Purview insight reports
> [!div class="nextstepaction"] > [Glossary insights](glossary-insights.md)
-> [!div class="nextstepaction"]
-> [Scan insights](scan-insights.md)
- > [!div class="nextstepaction"] > [Sensitivity labeling insights](./sensitivity-insights.md)
purview Concept Guidelines Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-guidelines-pricing.md
Direct costs impacting Azure Purview pricing are based on the following three di
#### Automated scanning, classification and ingestion There are two major automated processes that can trigger ingestion of metadata into Azure Purview:
-1. Automatic scans using native [connectors](/azure-purview-connector-overview.md). This process includes three main steps:
+1. Automatic scans using native [connectors](azure-purview-connector-overview.md). This process includes three main steps:
   - Metadata scan
   - Automatic classification
   - Ingestion of metadata into Azure Purview
purview Concept Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-insights.md
The report provides broad insights through graphs and KPIs and later deep dive i
> [!NOTE] > File Extension Insights has been merged into Asset Insights with richer trend report showing growth in data size by file extension. Learn more by exploring [Asset Insights](asset-insights.md)
->
->
-
-## Scan Insights
-
-The report enables Data Source Administrators to understand overall health of the scans - how many succeeded, how many failed, how many canceled. This report gives a status update on scans that have been executed in the Azure Purview account within a time period of last seven days or last 30 days.
-
-The report also allows administrators to deep dive and explore which scans failed and on what specific source types. To further enable users to investigate, the report helps them navigate into the scan history page within the "Sources" experience.
## Glossary Insights
For more information, see [Sensitivity label insights about your data in Azure P
## Next steps
+* [Asset insights](asset-insights.md)
* [Glossary insights](glossary-insights.md)
-* [Scan insights](scan-insights.md)
* [Classification insights](./classification-insights.md)
purview How To Monitor Scan Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-monitor-scan-runs.md
+
+ Title: Monitor scan runs in Azure Purview
+description: This guide describes how to monitor the scan runs in Azure Purview.
+++++ Last updated : 04/04/2022++
+# Monitor scan runs in Azure Purview
+
+In Azure Purview, you can register and scan various types of data sources, and you can view the scan status over time. This article outlines how to monitor and get a bird's eye view of your scan runs in Azure Purview.
+
+> [!IMPORTANT]
+> The monitoring experience is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Monitor scan runs
+
+1. Go to your Azure Purview account -> open **Azure Purview Studio** -> **Data map** -> **Monitoring**.
+
+1. The high-level KPIs show total scan runs within a period. The time period defaults to the last 30 days; you can also select the last seven days. Based on the time filter selected, you can see the distribution of successful, failed, and canceled scan runs by week or by day in the graph.
+
+ :::image type="content" source="./media/how-to-monitor-scan-runs/monitor-scan-runs.png" alt-text="View scan runs over time":::
+
+1. At the bottom of the graph, there is a **View more** link for you to explore further. The link opens the **Scan status** page. Here you can see a scan name and the number of times it has succeeded, failed, or been canceled in the time period. You can also filter the list by source types.
+
+ :::image type="content" source="./media/how-to-monitor-scan-runs/view-scan-status.png" alt-text="View scan status in details" lightbox="./media/how-to-monitor-scan-runs/view-scan-status.png":::
+
+1. You can explore a specific scan further by selecting the **scan name**. It connects you to the scan history page, where you can find the list of run IDs with more execution details.
+
+ :::image type="content" source="./media/how-to-monitor-scan-runs/view-scan-history.png" alt-text="View scan history for a given scan" lightbox="./media/how-to-monitor-scan-runs/view-scan-history.png":::
+
+1. You can come back to the **Scan status** page by following the breadcrumbs in the upper-left corner of the run history page.
+
+## Next steps
+
+* [Azure Purview supported data sources and file types](azure-purview-connector-overview.md)
+* [Manage data sources](manage-data-sources.md)
+* [Scan and ingestion](concept-scans-and-ingestion.md)
purview Scan Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/scan-insights.md
- Title: Scan insights on your data in Azure Purview
-description: This how-to guide describes how to view and use Azure Purview Insights scan reporting on your data.
----- Previously updated : 09/27/2021--
-# Scan insights on your data in Azure Purview
-
-This how-to guide describes how to access, view, and filter Azure Purview scan insight reports for your data.
-
-> [!IMPORTANT]
-> Azure Purview Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this how-to guide, you'll learn how to:
-
-> [!div class="checklist"]
-> * View insights from your Azure Purview account.
-> * Get a bird's eye view of your scans.
-
-## Prerequisites
-
-Before getting started with Azure Purview insights, make sure that you've completed the following steps:
-
-* Set up your Azure resources and populate the account with data.
-* Set up and complete a scan on the data source.
-
-For more information, see [Manage data sources in Azure Purview](manage-data-sources.md).
-
-## Use Azure Purview Scan Insights
-
-In Azure Purview, you can register and scan source types. You can view the scan status over time in Scan Insights. The insights tell you how many scans failed, succeeded, or get canceled within a certain time period.
-
-### View scan insights
-
-1. Go to the **Azure Purview** instance screen in the Azure portal and select your Azure Purview account.
-
-1. On the **Overview** page, in the **Get Started** section, select the **Open Azure Purview Studio** tile.
-
- :::image type="content" source="./media/scan-insights/portal-access.png" alt-text="Launch Azure Purview from the Azure portal":::
-
-1. On the Azure Purview **Home** page, select **Insights** on the left menu.
-
- :::image type="content" source="./media/scan-insights/view-insights.png" alt-text="View your insights in the Azure portal":::
-
-1. In the **Insights** area, select **Scans** to display the Azure Purview **Scan insights** report.
-
-### View high-level KPIs to show count of scans by status and deep-dive into each scan
-
-1. The high-level KPIs show total scans run within a period. The time period is defaulted at last 30 days. However, you can select last seven days, as well. Based on the time filter, the KPI values reflect the count of scans appropriately.
--
-1. Based on the time filter value selected, you can see the distribution of successful, failed, and canceled scans by week or by the day in the graph.
-
-1. At the bottom of the graph, there is a **View more** link for you to explore further. The link opens the **Scan Status** page within Scan Insights experience. Here you can see a scan name and the number of times it has succeeded, failed, or been canceled in the last 30 days.
-
- :::image type="content" source="./media/scan-insights/main-graph.png" alt-text="View Scan status over time":::
-
-4. You can explore a specific scan further, by selecting the **scan name** that will connect you to the scan history within the **Data Map** experience of Azure Purview. From the run history page, you can get the run ID that will help in further failure investigation.
-
- :::image type="content" source="./media/scan-insights/scan-status.png" alt-text="View Scan details":::
-
-5. Finally, you can come back to Scan Insights **Scan Status** page by following the bread crumbs on the top left corner of the run history page.
-
- :::image type="content" source="./media/scan-insights/scan-history.png" alt-text="View scan history":::
-
-## Next steps
-
-* Learn more about Azure Purview **Insights** with
-[Data Insights](./concept-insights.md)
-
-* Learn more about Azure Purview's **Data Map** experience with [Manage data sources](./manage-data-sources.md)
purview Sensitivity Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/sensitivity-insights.md
For more information, see [Automatically label your data in Azure Purview](creat
Learn more about these Azure Purview insight reports:

- [Glossary insights](glossary-insights.md)
-- [Scan insights](scan-insights.md)
- [Classification insights](./classification-insights.md)
search Search Security Manage Encryption Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-manage-encryption-keys.md
Title: Encrypt data using customer-managed keys
-description: Supplement server-side encryption over indexes and synonym maps in Azure Cognitive Search using keys that you create and manage in Azure Key Vault.
+description: Supplement server-side encryption in Azure Cognitive Search using customer managed keys (CMK) or bring your own keys (BYOK) that you create and manage in Azure Key Vault.
Previously updated : 01/25/2022 Last updated : 04/07/2022 # Configure customer-managed keys for data encryption in Azure Cognitive Search
-Azure Cognitive Search automatically encrypts content with [service-managed keys](../security/fundamentals/encryption-atrest.md#azure-encryption-at-rest-components). If more protection is needed, you can supplement default encryption with an additional encryption layer using keys that you create and manage in Azure Key Vault. Objects that can be encrypted include indexes, synonym lists, indexers, data sources, and skillsets.
+Azure Cognitive Search automatically encrypts data at rest with [service-managed keys](../security/fundamentals/encryption-atrest.md#azure-encryption-at-rest-components). If more protection is needed, you can supplement default encryption with an additional encryption layer using keys that you create and manage in Azure Key Vault.
-This article walks you through the steps of setting up customer-managed key (CMK) encryption. Here are some points to keep in mind:
+This article walks you through the steps of setting up customer-managed key (CMK) or "bring-your-own-key" (BYOK) encryption. Here are some points to keep in mind:
+++ CMK encryption is enacted on individual objects. If you require CMK unilaterally across your search service, [set an enforcement policy](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#searchencryptionwithcmk) at the service level so that you can be notified if the service falls out of compliance.
+ CMK encryption depends on [Azure Key Vault](../key-vault/general/overview.md). You can create your own encryption keys and store them in a key vault, or you can use Azure Key Vault APIs to generate encryption keys.
+ CMK encryption occurs when an object is created. You can't encrypt objects that already exist.
-Encryption is computationally expensive to decrypt so only sensitive content is encrypted. This includes all content within indexes and synonym lists. For indexers, data sources, and skillsets, only those fields that store connection strings, descriptions, keys, and user inputs are encrypted. For example, skillsets have Cognitive Services keys, and some skills accept user inputs, such as custom entities. In both cases, keys and user inputs into skills are encrypted.
+## CMK-qualified encryption
+
+Objects that can be encrypted include indexes, synonym lists, indexers, data sources, and skillsets. Encryption is computationally expensive to decrypt so only sensitive content is encrypted.
+
+Encryption is performed over the following objects:
+++ All content within indexes and synonym lists, including descriptions.
+++ For indexers, data sources, and skillsets, only those fields that store connection strings, descriptions, keys, and user inputs are encrypted. For example, skillsets have Cognitive Services keys, and some skills accept user inputs, such as custom entities. In both cases, keys and user inputs into skills are encrypted.

## Double encryption
sentinel Best Practices Workspace Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/best-practices-workspace-architecture.md
Consider the following when working with multiple regions:
- Bandwidth costs vary depending on the source and destination region and collection method. For more information, see: - [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/)
- - [Data transfers charges using Log Analytics ](../azure-monitor/logs/manage-cost-storage.md).
+ - [Data transfers charges using Log Analytics ](../azure-monitor/usage-estimated-costs.md#data-transfer-charges).
- Use templates for your analytics rules, custom queries, workbooks, and other resources to make your deployments more efficient. Deploy the templates instead of manually deploying each resource in each region.

-- Connectors that are based on diagnostics settings do not incur in-bandwidth costs. For more information, see [Manage usage and costs with Azure Monitor Logs](../azure-monitor/logs/manage-cost-storage.md#data-transfer-charges-using-log-analytics).
+- Connectors that are based on diagnostics settings do not incur in-bandwidth costs. For more information, see [Data transfers charges using Log Analytics](../azure-monitor/usage-estimated-costs.md#data-transfer-charges).
For example, if you decide to collect logs from Virtual Machines in East US and send them to a Microsoft Sentinel workspace in West US, you'll be charged ingress costs for the data transfer. Since the Log Analytics agent compresses the data in transit, the size charged for the bandwidth may be lower than the size of the logs in Microsoft Sentinel.
sentinel Billing Monitor Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing-monitor-costs.md
You could also apply further controls. For example, to view only the costs assoc
Microsoft Sentinel data ingestion volumes appear under **Security Insights** in some portal Usage Charts.
-The Microsoft Sentinel pricing tiers don't include Log Analytics charges. To change your pricing tier commitment for Log Analytics, see [Changing pricing tier](../azure-monitor/logs/manage-cost-storage.md#changing-pricing-tier).
+The Microsoft Sentinel pricing tiers don't include Log Analytics charges. To change your pricing tier commitment for Log Analytics, see [Change pricing tier for Log Analytics workspace](../azure-monitor/logs/change-pricing-tier.md).
For more information, see [Create budgets](#create-budgets) and [Reduce costs in Microsoft Sentinel](billing-monitor-costs.md).
To define a daily volume cap, select **Usage and estimated costs** in the left n
The **Usage and estimated costs** screen also shows your ingested data volume trend in the past 31 days, and the total retained data volume.
-The daily cap doesn't limit collection of all data types. Security data is excluded from the cap. For more information about managing the daily cap in Log Analytics, see [Manage your maximum daily data volume](../azure-monitor/logs/manage-cost-storage.md#manage-your-maximum-daily-data-volume).
+The daily cap doesn't limit collection of all data types. Security data is excluded from the cap. For more information about managing the daily cap in Log Analytics, see [Set daily cap on Log Analytics workspace](../azure-monitor/logs/daily-cap.md).
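Besides the portal, the daily cap can be set with the Azure CLI. The sketch below uses the workspace `--quota` parameter (daily ingestion in GB); the parameter name and value are assumptions to verify against your CLI version, and the resource names are placeholders.

```
# Set a 10 GB/day ingestion cap on the workspace (verify the --quota parameter on your CLI version).
az monitor log-analytics workspace update \
    --resource-group myResourceGroup \
    --workspace-name mySentinelWorkspace \
    --quota 10
```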
## Next steps
The daily cap doesn't limit collection of all data types. Security data is exclu
- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
- Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
-- For more tips on reducing Log Analytics data volume, see [Tips for reducing data volume](../azure-monitor/logs/manage-cost-storage.md#tips-for-reducing-data-volume).
+- For more tips on reducing Log Analytics data volume, see [Azure Monitor best practices - Cost management](../azure-monitor/best-practices-cost.md).
sentinel Billing Reduce Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing-reduce-costs.md
To change your pricing tier commitment, select one of the other tiers on the pri
Microsoft Sentinel data ingestion volumes appear under **Security Insights** in some portal Usage Charts.
-The Microsoft Sentinel pricing tiers don't include Log Analytics charges. To change your pricing tier commitment for Log Analytics, see [Changing pricing tier](../azure-monitor/logs/manage-cost-storage.md#changing-pricing-tier).
+The Microsoft Sentinel pricing tiers don't include Log Analytics charges. To change your pricing tier commitment for Log Analytics, see [Change pricing tier](../azure-monitor/logs/change-pricing-tier.md).
## Separate non-security data in a different workspace
Here are some other considerations for moving to a dedicated cluster for cost op
- Moving a cluster to another resource group or subscription isn't currently supported. - A workspace link to a cluster fails if the workspace is linked to another cluster.
-For more information about dedicated clusters, see [Log Analytics dedicated clusters](../azure-monitor/logs/manage-cost-storage.md#log-analytics-dedicated-clusters).
+For more information about dedicated clusters, see [Log Analytics dedicated clusters](../azure-monitor/logs/cost-logs.md#dedicated-clusters).
## Reduce long-term data retention costs with Azure Data Explorer or archived logs (preview)
Besides for the predefined sets of events that you can select to ingest, such as
- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
- Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
-- For more tips on reducing Log Analytics data volume, see [Tips for reducing data volume](../azure-monitor/logs/manage-cost-storage.md#tips-for-reducing-data-volume).
+- For more tips on reducing Log Analytics data volume, see [Azure Monitor best practices - Cost management](../azure-monitor/best-practices-cost.md).
sentinel Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing.md
If you're billed at Pay-As-You-Go rate, the following table shows how Microsoft
#### [Free data meters](#tab/free-data-meters)
-The following table shows how Microsoft Sentinel and Log Analytics costs appear in the **Service name** and **Meter** columns of your Azure bill for free data services. For more information, see [Viewing Data Allocation Benefits](../azure-monitor/logs/manage-cost-storage.md#viewing-data-allocation-benefits).
+The following table shows how Microsoft Sentinel and Log Analytics costs appear in the **Service name** and **Meter** columns of your Azure bill for free data services. For more information, see [Viewing Data Allocation Benefits](../azure-monitor/usage-estimated-costs.md#viewing-data-allocation-benefits).
Cost description | Service name | Meter |
|--|--|--|
Any other services you use could have associated costs.
After you enable Microsoft Sentinel on a Log Analytics workspace, you can retain all data ingested into the workspace at no charge for the first 90 days. Retention beyond 90 days is charged per the standard [Log Analytics retention prices](https://azure.microsoft.com/pricing/details/monitor/).
-You can specify different retention settings for individual data types. For more information, see [Retention by data type](../azure-monitor/logs/manage-cost-storage.md#retention-by-data-type). You can also enable long-term retention for your data and have access to historical logs by enabling archived logs. Data archive is a low-cost retention layer for archival storage. It's charged based on the volume of data stored and scanned. For more information, see [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md). Archived logs are in public preview.
+You can specify different retention settings for individual data types. For more information, see [Retention by data type](../azure-monitor/logs/data-retention-archive.md#set-retention-and-archive-policy-by-table). You can also enable long-term retention for your data and have access to historical logs by enabling archived logs. Data archive is a low-cost retention layer for archival storage. It's charged based on the volume of data stored and scanned. For more information, see [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md). Archived logs are in public preview.
The 90 day retention doesn't apply to basic logs. If you want to extend data retention for basic logs beyond eight days, you can store that data in archived logs for up to seven years.
Data connectors listed as public preview don't generate cost. Data connectors ge
- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.-- For more tips on reducing Log Analytics data volume, see [Tips for reducing data volume](../azure-monitor/logs/manage-cost-storage.md#tips-for-reducing-data-volume).
+- For more tips on reducing Log Analytics data volume, see [Azure Monitor best practices - Cost management](../azure-monitor/best-practices-cost.md).
sentinel Collaborate In Microsoft Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/collaborate-in-microsoft-teams.md
Title: Collaborate in Microsoft Teams with a Microsoft Sentinel incident team | Microsoft Docs description: Learn how to connect to Microsoft Teams from Microsoft Sentinel to collaborate with others on your team using Microsoft Sentinel data.-+ Previously updated : 11/09/2021- Last updated : 03/30/2022+
Investigate together with an *incident team* by integrating Microsoft Teams dire
1. In Microsoft Sentinel, in the **Threat management** > **Incidents** grid, select the incident you're currently investigating.
-1. At the bottom of the incident pane that appears on the right, select **Actions** > **Create team**.
+1. At the bottom of the incident pane that appears on the right, select **Actions** > **Create team (Preview)**.
[ ![Create a team to collaborate in a incident team.](media/collaborate-in-microsoft-teams/create-team.png) ](media/collaborate-in-microsoft-teams/create-team.png#lightbox)
- The **New team** pane opens on the right. Define the following settings for your incident team:
+ The **Incident team** pane opens on the right. Define the following settings for your incident team:
- **Team name**: Automatically defined as the name of your incident. Modify the name as needed so that it's easily identifiable to you.
- - **Description**: Enter a meaningful description for your incident team.
- - **Add groups**: Select one or more Azure AD groups to add to your incident team. Individual users aren't supported in this page. If you need to add individual users, [do so in Microsoft Teams](#more-users) after you've created the team.
+ - **Team description**: Enter a meaningful description for your incident team.
+ - **Add groups and members**: Select one or more Azure AD users and/or groups to add to your incident team. As you select users and groups, they will appear in the **Selected groups and users:** list below the **Add groups and members** list.
> [!TIP]
- > If you regularly work with the same teams, you may want to select the star :::image type="icon" source="media/collaborate-in-microsoft-teams/save-as-favorite.png" border="false"::: to save them as favorites.
+ > If you regularly work with the same users and groups, you may want to select the star :::image type="icon" source="media/collaborate-in-microsoft-teams/save-as-favorite.png" border="false"::: next to each one in the **Selected groups and users** list to save them as favorites.
>
- > Favorites are automatically selected the next time you create a team. If you want to remove it from the next team you create, either select **Delete** :::image type="icon" source="media/collaborate-in-microsoft-teams/delete-user-group.png" border="false":::, or select the star :::image type="icon" source="media/collaborate-in-microsoft-teams/save-as-favorite.png" border="false"::: again to remove the team from your favorites altogether.
+ > Favorites are automatically selected the next time you create a team. If you want to remove a favorite from the next team you create, either select **Delete** :::image type="icon" source="media/collaborate-in-microsoft-teams/delete-user-group.png" border="false":::, or select the star :::image type="icon" source="media/collaborate-in-microsoft-teams/save-as-favorite.png" border="false"::: again to remove the team from your favorites altogether.
>
-1. When you're done adding groups, select **Create** to create your incident team.
+1. When you're done adding users and groups, select **Create team** to create your incident team.
The incident pane refreshes, with a link to your new incident team under the **Team name** title.
sentinel Design Your Workspace Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/design-your-workspace-architecture.md
Before working through the decision tree, make sure you have the following infor
|**Regulatory requirements related to Azure data residency** | Microsoft Sentinel can run on workspaces in most, but not all regions [supported in GA for Log Analytics](https://azure.microsoft.com/global-infrastructure/services/?products=monitor). Newly supported Log Analytics regions may take some time to onboard the Microsoft Sentinel service. <br><br> Data generated by Microsoft Sentinel, such as incidents, bookmarks, and analytics rules, may contain some customer data sourced from the customer's Log Analytics workspaces.<br><br> For more information, see [Geographical availability and data residency](quickstart-onboard.md#geographical-availability-and-data-residency).| |**Data sources** | Find out which [data sources](connect-data-sources.md) you need to connect, including built-in connectors to both Microsoft and non-Microsoft solutions. You can also use Common Event Format (CEF), Syslog or REST-API to connect your data sources with Microsoft Sentinel. <br><br>If you have Azure VMs in multiple Azure locations that you need to collect the logs from and the saving on data egress cost is important to you, you need to calculate the data egress cost using [Bandwidth pricing calculator](https://azure.microsoft.com/pricing/details/bandwidth/#overview) for each Azure location. | |**User roles and data access levels/permissions** | Microsoft Sentinel uses [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md) to provide [built-in roles](../role-based-access-control/built-in-roles.md) that can be assigned to users, groups, and services in Azure. <br><br>All Microsoft Sentinel built-in roles grant read access to the data in your Microsoft Sentinel workspace. Therefore, you need to find out whether there is a need to control data access per data source or row-level as that will impact the workspace design decision. For more information, see [Custom roles and advanced Azure RBAC](roles.md#custom-roles-and-advanced-azure-rbac). |
-|**Daily ingestion rate** | The daily ingestion rate, usually in GB/day, is one of the key factors in cost management and planning considerations and workspace design for Microsoft Sentinel. <br><br>In most cloud and hybrid environments, networking devices, such as firewalls or proxies, and Windows and Linux servers produce the most ingested data. To obtain the most accurate results, Microsoft recommends an exhaustive inventory of data sources. <br><br>Alternatively, the Microsoft Sentinel [cost calculator](https://cloudpartners.transform.microsoft.com/download?assetname=assets%2FAzure_Sentinel_Calculator.xlsx&download=1) includes tables useful in estimating footprints of data sources. <br><br>**Important**: These estimates are a starting point, and log verbosity settings and workload will produce variances. We recommend that you monitor your system regularly to track any changes. Regular monitoring is recommended based on your scenario. <br><br>For more information, see [Manage usage and costs with Azure Monitor Logs](../azure-monitor/logs/manage-cost-storage.md). |
+|**Daily ingestion rate** | The daily ingestion rate, usually in GB/day, is one of the key factors in cost management and planning considerations and workspace design for Microsoft Sentinel. <br><br>In most cloud and hybrid environments, networking devices, such as firewalls or proxies, and Windows and Linux servers produce the most ingested data. To obtain the most accurate results, Microsoft recommends an exhaustive inventory of data sources. <br><br>Alternatively, the Microsoft Sentinel [cost calculator](https://cloudpartners.transform.microsoft.com/download?assetname=assets%2FAzure_Sentinel_Calculator.xlsx&download=1) includes tables useful in estimating footprints of data sources. <br><br>**Important**: These estimates are a starting point, and log verbosity settings and workload will produce variances. We recommend that you monitor your system regularly to track any changes. Regular monitoring is recommended based on your scenario. <br><br>For more information, see [Azure Monitor Logs pricing details](../azure-monitor/logs/cost-logs.md). |
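If the workspace already receives data, the daily ingestion rate can be measured rather than estimated. The following query is a simple sketch over the **Usage** table that averages billable volume per data type across the last seven days; adjust the window to match your evaluation period.

```kusto
// Average billable GB/day per data type over the last 7 days.
Usage
| where TimeGenerated > ago(7d)
| where IsBillable == true
| summarize AvgGBperDay = sum(Quantity) / 1000 / 7 by DataType
| order by AvgGBperDay desc
```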
## Decision tree
sentinel Investigate Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/investigate-cases.md
Title: Investigate incidents with Microsoft Sentinel| Microsoft Docs
description: In this article, learn how to use Microsoft Sentinel to create advanced alert rules that generate incidents you can assign and investigate. Previously updated : 01/30/2022 Last updated : 03/30/2022
An incident can include multiple alerts. It's an aggregation of all the relevant
1. Select **Incidents**. The **Incidents** page lets you know how many incidents you have, how many are open, how many you've set to **In progress**, and how many are closed. For each incident, you can see the time it occurred, and the status of the incident. Look at the severity to decide which incidents to handle first.
- ![View incident severity](media/tutorial-investigate-cases/incident-severity.png)
+ ![View incident severity](media/investigate-cases/incident-severity.png)
1. You can filter the incidents as needed, for example by status or severity. For more information, see [Search for incidents](#search-for-incidents).
An incident can include multiple alerts. It's an aggregation of all the relevant
1. To view more details about the alerts and entities in the incident, select **View full details** in the incident page and review the relevant tabs that summarize the incident information.
- ![View alert details](media/tutorial-investigate-cases/incident-timeline.png)
+ ![View alert details](media/investigate-cases/incident-timeline.png)
For example:
An incident can include multiple alerts. It's an aggregation of all the relevant
1. If you're actively investigating an incident, it's a good idea to set the incident's status to **In progress** until you close it.
-1. Incidents can be assigned to a specific user. For each incident you can assign an owner, by setting the **Incident owner** field. All incidents start as unassigned. You can also add comments so that other analysts will be able to understand what you investigated and what your concerns are around the incident.
+1. Incidents can be assigned to a specific user or to a group. For each incident you can assign an owner by setting the **Owner** field. All incidents start as unassigned. You can also add comments so that other analysts will be able to understand what you investigated and what your concerns are around the incident.
- ![Assign incident to user](media/tutorial-investigate-cases/assign-incident-to-user.png)
+ ![Assign incident to user](media/investigate-cases/assign-incident-to-user.png)
+
+ Recently selected users and groups will appear at the top of the pictured drop-down list.
1. Select **Investigate** to view the investigation map.
To use the investigation graph:
1. Select an incident, then select **Investigate**. This takes you to the investigation graph. The graph provides an illustrative map of the entities directly connected to the alert and each resource connected further.
- [ ![View map.](media/tutorial-investigate-cases/investigation-map.png) ](media/tutorial-investigate-cases/investigation-map.png#lightbox)
+ [ ![View map.](media/investigate-cases/investigation-map.png) ](media/investigate-cases/investigation-map.png#lightbox)
> [!IMPORTANT] > - You'll only be able to investigate the incident if you used the entity mapping fields when you set up your analytics rule. The investigation graph requires that your original incident includes entities.
To use the investigation graph:
1. Select an entity to open the **Entities** pane so you can review information on that entity.
- ![View entities in map](media/tutorial-investigate-cases/map-entities.png)
+ ![View entities in map](media/investigate-cases/map-entities.png)
1. Expand your investigation by hovering over each entity to reveal a list of questions that was designed by our security experts and analysts per entity type to deepen your investigation. We call these options **exploration queries**.
- ![Explore more details](media/tutorial-investigate-cases/exploration-cases.png)
+ ![Explore more details](media/investigate-cases/exploration-cases.png)
 For example, on a computer you can request related alerts. If you select an exploration query, the resulting entities are added back to the graph. In this example, selecting **Related alerts** returned the following alerts into the graph (a sketch of this kind of query appears after this list):
- ![View related alerts](media/tutorial-investigate-cases/related-alerts.png)
+ ![View related alerts](media/investigate-cases/related-alerts.png)
1. For each exploration query, you can select the option to open the raw event results and the query used in Log Analytics, by selecting **Events\>**. 1. In order to understand the incident, the graph gives you a parallel timeline.
- ![View timeline in map](media/tutorial-investigate-cases/map-timeline.png)
+ ![View timeline in map](media/investigate-cases/map-timeline.png)
1. Hover over the timeline to see which things on the graph occurred at what point in time.
- ![Use timeline in map to investigate alerts](media/tutorial-investigate-cases/use-timeline.png)
+ ![Use timeline in map to investigate alerts](media/investigate-cases/use-timeline.png)
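The exploration queries mentioned above are ordinary Log Analytics queries scoped to the selected entity. The snippet below is a hypothetical illustration of that idea rather than the exact query Microsoft Sentinel runs; the host name is a placeholder.

```kusto
// Hypothetical exploration-style query: other alerts that reference the same host entity.
SecurityAlert
| where TimeGenerated > ago(7d)
| where Entities has "DESKTOP-1282V4D"
| project TimeGenerated, AlertName, AlertSeverity, ProviderName
```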
## Comment on incidents
Another important thing that you can do with comments is enrich your incidents a
Comments are simple to use. You access them through the **Comments** tab on the incident details page. ### Frequently asked questions
Once you have resolved a particular incident (for example, when your investigati
- False Positive - incorrect data - Undetermined For more information about false positives and benign positives, see [Handle false positives in Microsoft Sentinel](false-positives.md). After choosing the appropriate classification, add some descriptive text in the **Comment** field. This will be useful in the event you need to refer back to this incident. Click **Apply** when you're done, and the incident will be closed. ## Search for incidents
To modify the search parameters, select the **Search** button and then select th
For example: By default, incident searches run across the **Incident ID**, **Title**, **Tags**, **Owner**, and **Product name** values only. In the search pane, scroll down the list to select one or more other parameters to search, and select **Apply** to update the search parameters. Select **Set to default** to reset the selected parameters to the default option.
sentinel Monitor Data Connector Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-data-connector-health.md
To get data connector health data from the *SentinelHealth* data table, you must
Once the health feature is turned on, the *SentinelHealth* data table is created at the first success or failure event generated for your data connectors. > [!TIP]
-> To configure the retention time for your health events, see the [Log Analytics retention configuration documentation](../azure-monitor/logs/manage-cost-storage.md).
+> To configure the retention time for your health events, see the [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md).
> > [!IMPORTANT]
sentinel Network Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/network-normalization-schema.md
# The Advanced Security Information Model (ASIM) Network Session normalization schema reference (Public preview)
-The Microsoft Sentinel Network Session normalization schema is used to describe an IP network activity. Network connections and network sessions are included. Such events are reported, for example, by operating systems, routers, firewalls, intrusion prevention systems, and web security gateways.
+The Microsoft Sentinel Network Session normalization schema represents IP network activity, such as network connections and network sessions. Such events are reported, for example, by operating systems, routers, firewalls, and intrusion prevention systems.
The network normalization schema can represent any type of IP network session but is designed to provide support for common source types, such as Netflow, firewalls, and intrusion prevention systems.
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| Field | Class | Type | Description | |-|-||-|
-| <a name="dst"></a>**Dst** | Recommended | String | A unique identifier of the server receiving the DNS request. <br><br>This field might alias the [DstDvcId](#dstdvcid), [DstHostname](#dsthostname), or [DstIpAddr](#dstipaddr) fields. <br><br>Example: `192.168.12.1` |
+| <a name="dst"></a>**Dst** | Recommended | Alias | A unique identifier of the server receiving the DNS request. <br><br>This field might alias the [DstDvcId](#dstdvcid), [DstHostname](#dsthostname), or [DstIpAddr](#dstipaddr) fields. <br><br>Example: `192.168.12.1` |
|<a name="dstipaddr"></a> **DstIpAddr** | Recommended | IP address | The IP address of the connection or session destination. If the session uses network address translation, this is the publicly visible address, and not the original address of the source which is stored in [DstNatIpAddr](#dstnatipaddr)<br><br>Example: `2001:db8::ff00:42:8329`<br><br>**Note**: This value is mandatory if [DstHostname](#dsthostname) is specified. | | <a name="dstportnumber"></a>**DstPortNumber** | Optional | Integer | The destination IP port.<br><br>Example: `443` | | <a name="dsthostname"></a>**DstHostname** | Recommended | Hostname | The destination device hostname, excluding domain information. If no device name is available, store the relevant IP address in this field.<br><br>Example: `DESKTOP-1282V4D` |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| Field | Class | Type | Description | |-|-||-|
-| <a name="src"></a>**Src** | Recommended | String | A unique identifier of the source device. <br><br>This field might alias the [SrcDvcId](#srcdvcid), [SrcHostname](#srchostname), or [SrcIpAddr](#srcipaddr) fields. <br><br>Example: `192.168.12.1` |
+| <a name="src"></a>**Src** | Recommended | Alias | A unique identifier of the source device. <br><br>This field might alias the [SrcDvcId](#srcdvcid), [SrcHostname](#srchostname), or [SrcIpAddr](#srcipaddr) fields. <br><br>Example: `192.168.12.1` |
| <a name="srcipaddr"></a>**SrcIpAddr** | Recommended | IP address | The IP address from which the connection or session originated. This value is mandatory if **SrcHostname** is specified. If the session uses network address translation, this is the publicly visible address, and not the original address of the source which is stored in [SrcNatIpAddr](#srcnatipaddr)<br><br>Example: `77.138.103.108` | | **SrcPortNumber** | Optional | Integer | The IP port from which the connection originated. Might not be relevant for a session comprising multiple connections.<br><br>Example: `2335` | | <a name="srchostname"></a> **SrcHostname** | Recommended | Hostname | The source device hostname, excluding domain information. If no device name is available, store the relevant IP address in this field.<br><br>Example: `DESKTOP-1282V4D` |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **SrcDvcIdType** | Optional | DvcIdType | The type of [SrcDvcId](#srcdvcid). For a list of allowed values and further information refer to [DvcIdType](normalization-about-schemas.md#dvcidtype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>**Note**: This field is required if [SrcDvcId](#srcdvcid) is used. | | **SrcDeviceType** | Optional | DeviceType | The type of the source device. For a list of allowed values and further information refer to [DeviceType](normalization-about-schemas.md#devicetype) in the [Schema Overview article](normalization-about-schemas.md). | | **SrcZone** | Optional | String | The network zone of the source, as defined by the reporting device.<br><br>Example: `Internet` |
-| **SrcIntefaceName** | Optional | String | The network interface used for the connection or session by the source device. <br><br>Example: `eth01` |
+| **SrcInterfaceName** | Optional | String | The network interface used for the connection or session by the source device. <br><br>Example: `eth01` |
| **SrcInterfaceGuid** | Optional | String | The GUID of the network interface used on the source device.<br><br>Example:<br>`46ad544b-eaf0-47ef-`<br>`827c-266030f545a6` | | **SrcMacAddr** | Optional | String | The MAC address of the network interface from which the connection or session originated.<br><br>Example: `06:10:9f:eb:8f:14` | | <a name="srcvlanid"></a>**SrcVlanId** | Optional | String | The VLAN ID related to the source device.<br><br>Example: `130` |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| <a name="srcusername"></a>**SrcUsername** | Optional | String | The source username, including domain information when available. For the supported format for different ID types, refer to [the User entity](normalization-about-schemas.md#the-user-entity). Use the simple form only if domain information isn't available.<br><br>Store the Username type in the [SrcUsernameType](#srcusernametype) field. If other username formats are available, store them in the fields `SrcUsername<UsernameType>`.<br><br>Example: `AlbertE` | | <a name="srcusernametype"></a>**SrcUsernameType** | Optional | UsernameType | Specifies the type of the username stored in the [SrcUsername](#srcusername) field. For a list of allowed values and further information refer to [UsernameType](normalization-about-schemas.md#usernametype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>Example: `Windows` | | **SrcUserType** | Optional | UserType | The type of source user. For a list of allowed values and further information refer to [UserType](normalization-about-schemas.md#usertype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [SrcOriginalUserType](#srcoriginalusertype) field. |
-| <a name="srcoriginalusertype"></a>**SrcOriginalUserType** | Optional | String | The original destination user type, if provided by the source. |
+| <a name="srcoriginalusertype"></a>**SrcOriginalUserType** | Optional | String | The original destination user type, if provided by the reporting decice. |
### Source application fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| Field | Class | Type | Description | |-|-||-| | <a name="srcappname"></a>**SrcAppName** | Optional | String | The name of the source application. <br><br>Example: `filezilla.exe` |
-| <a name="srcappid"></a>**SrcAppId** | Optional | String | The ID of the destination application, as reported by the reporting device.<br><br>Example: `124` |
+| <a name="srcappid"></a>**SrcAppId** | Optional | String | The ID of the source application, as reported by the reporting device.<br><br>Example: `124` |
| **SrcAppType** | Optional | AppType | The type of the source application. For a list of allowed values and further information refer to [AppType](normalization-about-schemas.md#apptype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>This field is mandatory if [SrcAppName](#srcappname) or [SrcAppId](#srcappid) are used. |
The following fields are used to represent the inspection that a security devi
| | | | | | **NetworkRuleName** | Optional | String | The name or ID of the rule by which [DvcAction](#dvcaction) was decided upon.<br><br> Example: `AnyAnyDrop` | | **NetworkRuleNumber** | Optional | Integer | The number of the rule by which [DvcAction](#dvcaction) was decided upon.<br><br>Example: `23` |
-| **Rule** | Mandatory | String | Either `NetworkRuleName` or `NetworkRuleNumber`. |
+| **Rule** | Mandatory | Alias | Either `NetworkRuleName` or `NetworkRuleNumber`. |
| **ThreatId** | Optional | String | The ID of the threat or malware identified in the network session.<br><br>Example: `Tr.124` | | **ThreatName** | Optional | String | The name of the threat or malware identified in the network session.<br><br>Example: `EICAR Test File` | | **ThreatCategory** | Optional | String | The category of the threat or malware identified in the network session.<br><br>Example: `Trojan` |
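As a worked example of the inspection fields, the sketch below looks for sessions a security device acted on and groups them by rule and detected threat. It again assumes the ASIM `_Im_NetworkSession` parser (or an equivalent normalized table) is available; the `Blocked` action value mirrors the example in the common fields documentation.

```kusto
// Sessions blocked by a security device, grouped by the rule and threat that triggered the action.
_Im_NetworkSession
| where TimeGenerated > ago(1d)
| where DvcAction == "Blocked"
| summarize BlockedSessions = count() by NetworkRuleName, ThreatName
```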
sentinel Normalization Common Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-common-fields.md
The following fields are defined by ASIM for all schemas:
| <a name="eventresultdetails"></a>**EventResultDetails** | Mandatory | Enumerated | Reason or details for the result reported in the [EventResult](#eventresult) field. Each schema documents the list of values valid for this field. The original, source specific, value is stored in the [EventOriginalResultDetails](#eventoriginalresultdetails) field.<br><br>Example: `NXDOMAIN`| | <a name="eventoriginaluid"></a>**EventOriginalUid** | Optional | String | A unique ID of the original record, if provided by the source.<br><br>Example: `69f37748-ddcd-4331-bf0f-b137f1ea83b`| | <a name="eventoriginaltype"></a>**EventOriginalType** | Optional | String | The original event type or ID, if provided by the source. For example, this field will be used to store the original Windows event ID. This value is used to derive [EventType](#eventtype), which should have only one of the values documented for each schema.<br><br>Example: `4624`|
-| <a name="eventoriginalsubtype"></a>**EventOriginalSubType** | Optional | String | The original event sub type or ID, if provided by the source. For example, this field will be used to store the original Windows logon type. This value is used to derive [EventSubType](#eventsubtype), which should have only one of the values documented for each schema.<br><br>Example: `2`|
+| <a name="eventoriginalsubtype"></a>**EventOriginalSubType** | Optional | String | The original event subtype or ID, if provided by the source. For example, this field will be used to store the original Windows logon type. This value is used to derive [EventSubType](#eventsubtype), which should have only one of the values documented for each schema.<br><br>Example: `2`|
| <a name="eventoriginalresultdetails"></a>**EventOriginalResultDetails** | Optional | String | The original result details provided by the source. This value is used to derive [EventResultDetails](#eventresultdetails), which should have only one of the values documented for each schema. | | <a name="eventseverity"></a>**EventSeverity** | Recommended | Enumerated | The severity of the event. Valid values are: `Informational`, `Low`, `Medium`, or `High`. |
-| <a name="eventoriginalseverity"></a>**EventOriginalSeverity** | Optional | String | The original severity as provided by the source. This value is used to derive [EventSeverity](#eventseverity). |
+| <a name="eventoriginalseverity"></a>**EventOriginalSeverity** | Optional | String | The original severity as provided by the reporting device. This value is used to derive [EventSeverity](#eventseverity). |
| <a name="eventproduct"></a>**EventProduct** | Mandatory | String | The product generating the event. The value should be one of the values listed in [Vendors and Products](#vendors-and-products).<br><br>Example: `Sysmon` | | <a name="eventproductversion"></a>**EventProductVersion** | Optional | String | The version of the product generating the event. <br><br>Example: `12.1` | | <a name="eventvendor"></a>**EventVendor** | Mandatory | String | The vendor of the product generating the event. The value should be one of the values listed in [Vendors and Products](#vendors-and-products).<br><br>Example: `Microsoft` <br><br> |
The following fields are defined by ASIM for all schemas:
| <a name="dvcfqdn"></a>**DvcFQDN** | Optional | String | The hostname of the device on which the event occurred or which reported the event, depending on the schema. <br><br> Example: `Contoso\DESKTOP-1282V4D`<br><br>**Note**: This field supports both traditional FQDN format and Windows domain\hostname format. The [DvcDomainType](#dvcdomaintype) field reflects the format used. | | <a name ="dvcid"></a>**DvcId** | Optional | String | The unique ID of the device on which the event occurred or which reported the event, depending on the schema. <br><br>Example: `41502da5-21b7-48ec-81c9-baeea8d7d669` | | <a name="dvcidtype"></a>**DvcIdType** | Optional | Enumerated | The type of [DvcId](#dvcid). For a list of allowed values and further information refer to [DvcIdType](#dvcidtype).<br>- `MDEid`<br><br>If multiple IDs are available, use the first one from the list, and store the others by using the field names **DvcAzureResourceId** and **DvcMDEid**, respectively.<br><br>**Note**: This field is required if the [DvcId](#dvcid) field is used. |
-| <a name="dvcmacaddr"></a>**DvcMacAddr** | Optional | MAC | The MAC address of the device on which the event occurred. <br><br>Example: `00:1B:44:11:3A:B7` |
+| <a name="dvcmacaddr"></a>**DvcMacAddr** | Optional | MAC | The MAC address of the device on which the event occurred or which reported the event. <br><br>Example: `00:1B:44:11:3A:B7` |
| <a name="dvczone"></a>**DvcZone** | Optional | String | The network on which the event occurred or which reported the event, depending on the schema. The zone is defined by the reporting device.<br><br>Example: `Dmz` |
-| <a name="dvcos"></a>**DvcOs** | Optional | String | The operating system running on the device on which the event occurred. <br><br>Example: `Windows` |
-| <a name="dvcosversion"></a>**DvcOsVersion** | Optional | String | The version of the operating system on the device on which the event occurred. <br><br>Example: `10` |
+| <a name="dvcos"></a>**DvcOs** | Optional | String | The operating system running on the device on which the event occurred or which reported the event. <br><br>Example: `Windows` |
+| <a name="dvcosversion"></a>**DvcOsVersion** | Optional | String | The version of the operating system on the device on which the event occurred or which reported the event. <br><br>Example: `10` |
| <a name="dvcaction"></a>**DvcAction** | Optional | String | For reporting security systems, the action taken by the system, if applicable. <br><br>Example: `Blocked` | | <a name="dvcoriginalaction"></a>**DvcOriginalAction** | Optional | String | The original [DvcAction](#dvcaction) as provided by the reporting device. | | <a name="dvcinterface"></a>**DvcInterface** | Optional | String | The network interface on which data was captured. This field is typically relevant to network related activity which is captured by an intermediate or tap device. |
sentinel Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/overview.md
While Workbooks are displayed differently in Microsoft Sentinel, it may be usefu
To help you reduce noise and minimize the number of alerts you have to review and investigate, Microsoft Sentinel uses [analytics to correlate alerts into incidents](detect-threats-built-in.md). **Incidents** are groups of related alerts that together indicate an actionable possible threat that you can investigate and resolve. Use the built-in correlation rules as-is, or use them as a starting point to build your own. Microsoft Sentinel also provides machine learning rules to map your network behavior and then look for anomalies across your resources. These analytics connect the dots by combining low-fidelity alerts about different entities into potential high-fidelity security incidents.
-![Incidents](./media/tutorial-investigate-cases/incident-severity.png)
+![Incidents](./media/investigate-cases/incident-severity.png)
## Security automation & orchestration
For example, if you use the ServiceNow ticketing system, you can use the tools p
Currently in preview, Microsoft Sentinel [deep investigation](investigate-cases.md) tools help you understand the scope and find the root cause of a potential security threat. You can choose an entity on the interactive graph to ask interesting questions for a specific entity, and drill down into that entity and its connections to get to the root cause of the threat.
-![Investigation](./media/tutorial-investigate-cases/map-timeline.png)
+![Investigation](./media/investigate-cases/map-timeline.png)
## Hunting
sentinel Quickstart Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/quickstart-onboard.md
After you connect your data sources, choose from a gallery of expertly created w
- **Log Analytics workspace**. Learn how to [create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md). For more information about Log Analytics workspaces, see [Designing your Azure Monitor Logs deployment](../azure-monitor/logs/design-logs-deployment.md).
- By default, you may have a default of [30 days retention](../azure-monitor/logs/manage-cost-storage.md#legacy-pricing-tiers) in the Log Analytics workspace used for Microsoft Sentinel. To make sure that you can use the full extent of Microsoft Sentinel functionality, raise this to 90 days. For more information, see [Change the retention period](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period).
+ By default, you may have a default of [30 days retention](../azure-monitor/logs/cost-logs.md#legacy-pricing-tiers) in the Log Analytics workspace used for Microsoft Sentinel. To make sure that you can use the full extent of Microsoft Sentinel functionality, raise this to 90 days. For more information, see [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md).
- **Permissions**:
sentinel Watchlists Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/watchlists-create.md
Last updated 02/22/2022
Watchlists in Microsoft Sentinel allow you to correlate data from a data source you provide with the events in your Microsoft Sentinel environment. For example, you might create a watchlist with a list of high value assets, terminated employees, or service accounts in your environment.
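A typical correlation looks like the following sketch, which matches events against a watchlist by its **SearchKey** column. The `HighValueAssets` alias and the use of the **SecurityEvent** table are hypothetical examples; substitute your own watchlist alias and event table.

```kusto
// Hypothetical example: summarize events on computers listed in a "HighValueAssets" watchlist.
let watchedAssets = _GetWatchlist('HighValueAssets') | project SearchKey;
SecurityEvent
| where TimeGenerated > ago(1d)
| where Computer in (watchedAssets)
| summarize Events = count() by Computer, Activity
```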
-Upload a watchlist file from a local folder or from your Azure Storage account. To create a watchlist file, you have the option to download one of the watchlist templates from Microsoft Sentinel to populate with your data. Then upload that file when you create the watchlist in Microsoft Sentinel.
+Upload a watchlist file from a local folder or from your Azure Storage account. To create a watchlist file, you have the option to download one of the watchlist templates from Microsoft Sentinel to populate with your data. Then upload that file when you create the watchlist in Microsoft Sentinel.
Local file uploads are currently limited to files of up to 3.8 MB in size. If you have a large watchlist file that's up to 500 MB in size, upload the file to your Azure Storage account. Before you create a watchlist, review the [limitations of watchlists](watchlists.md).
+When you create a watchlist, the watchlist name and alias must each be between 3 and 64 characters. The first and last characters must be alphanumeric, but you can include whitespace, hyphens, and underscores between them.
+ > [!IMPORTANT] > The features for watchlist templates and the ability to create a watchlist from a file in Azure Storage are currently in **PREVIEW**. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-
+> >
## Upload a watchlist from a local folder
-You have two ways to upload a CSV file from your local machine to create a watchlist.
+You have two ways to upload a CSV file from your local machine to create a watchlist.
- For a watchlist file you created without a watchlist template: Select **Add new** and enter the required information. - For a watchlist file created from a template downloaded from Microsoft Sentinel: Go to the watchlist **Templates (Preview)** tab. Select the option **Create from template**. Azure pre-populates the name, description, and watchlist alias for you. ### Upload watchlist from a file you created
-If you didn't use a watchlist template to create your file,
+If you didn't use a watchlist template to create your file,
1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.+ 1. Under **Configuration**, select **Watchlist**.+ 1. Select **+ Add new**.
- :::image type="content" source="./media/watchlists-create/sentinel-watchlist-new.png" alt-text="Screenshot of add watchlist option on watchlist page." lightbox="./media/watchlists-create/sentinel-watchlist-new.png":::
+ :::image type="content" source="./media/watchlists-create/sentinel-watchlist-new.png" alt-text="Screenshot of add watchlist option on watchlist page." lightbox="./media/watchlists-create/sentinel-watchlist-new.png":::
1. On the **General** page, provide the name, description, and alias for the watchlist.
- :::image type="content" source="./media/watchlists-create/sentinel-watchlist-general-country.png" alt-text="Screenshot of watchlist general tab in the watchlists wizard.":::
+ :::image type="content" source="./media/watchlists-create/sentinel-watchlist-general-country.png" alt-text="Screenshot of watchlist general tab in the watchlists wizard.":::
1. Select **Next: Source**.
-1. Use the information in the following table to upload your watchlist data.
+1. Use the information in the following table to upload your watchlist data.
- |Field |Description |
- |||
- |Select a type for the dataset | CSV file with a header (.csv) |
- |Number of lines before row with headings | Enter the number of lines before the header row that's in your data file. |
- |Upload file | Either drag and drop your data file, or select **Browse for files** and select the file to upload. |
- |SearchKey | Enter the name of a column in your watchlist that you expect to use as a join with other data or a frequent object of searches. For example, if your server watchlist contains country names and their respective two-letter country codes, and you expect to use the country codes often for search or joins, use the **Code** column as the SearchKey. |
+ |Field |Description |
+ |||
+ |Select a type for the dataset | CSV file with a header (.csv) |
+ |Number of lines before row with headings | Enter the number of lines before the header row that's in your data file. |
+ |Upload file | Either drag and drop your data file, or select **Browse for files** and select the file to upload. |
+ |SearchKey | Enter the name of a column in your watchlist that you expect to use as a join with other data or a frequent object of searches. For example, if your server watchlist contains country names and their respective two-letter country codes, and you expect to use the country codes often for search or joins, use the **Code** column as the SearchKey. |
+
1. Select **Next: Review and Create**.
- :::image type="content" source="./media/watchlists-create/sentinel-watchlist-source.png" alt-text="Screenshot of the watchlist source tab." lightbox="./media/watchlists-create/sentinel-watchlist-source.png":::
-
+ :::image type="content" source="./media/watchlists-create/sentinel-watchlist-source.png" alt-text="Screenshot of the watchlist source tab." lightbox="./media/watchlists-create/sentinel-watchlist-source.png":::
1. Review the information, verify that it's correct, wait for the **Validation passed** message, and then select **Create**.
- :::image type="content" source="./media/watchlists-create/sentinel-watchlist-review.png" alt-text="Screenshot of the watchlist review page.":::
+ :::image type="content" source="./media/watchlists-create/sentinel-watchlist-review.png" alt-text="Screenshot of the watchlist review page.":::
- A notification appears once the watchlist is created.
+ A notification appears once the watchlist is created.
It might take several minutes for the watchlist to be created and the new data to be available in queries.
It might take several minutes for the watchlist to be created and the new data t
To create the watchlist from a template you populated, 1. From the appropriate workspace in Microsoft Sentinel, select **Watchlist**.+ 1. Select the tab **Templates (Preview)**.+ 1. Select the appropriate template from the list to view details of the template in the right pane.+ 1. Select **Create from template**.
- :::image type="content" source="./media/watchlists-create/create-watchlist-from-template.png" alt-text="Screenshot of the option to create a watchlist from a built-in template." lightbox="./media/watchlists-create/create-watchlist-from-template.png":::
+ :::image type="content" source="./media/watchlists-create/create-watchlist-from-template.png" alt-text="Screenshot of the option to create a watchlist from a built-in template." lightbox="./media/watchlists-create/create-watchlist-from-template.png":::
1. On the **General** tab, notice that the **Name**, **Description**, and **Watchlist Alias** fields are all read-only.+ 1. On the **Source** tab, select **Browse for files** and select the file you created from the template.+ 1. Select **Next: Review and Create** > **Create**.+ 1. Watch for an Azure notification to appear when the watchlist is created. It might take several minutes for the watchlist to be created and the new data to be available in queries.
It might take several minutes for the watchlist to be created and the new data t
If you have a large watchlist up to 500 MB in size, upload your watchlist file to your Azure Storage account. Then create a shared access signature URL for Microsoft Sentinel to retrieve the watchlist data. A shared access signature URL is a URI that contains both the resource URI and shared access signature token of a resource like a CSV file in your storage account. Finally, add the watchlist to your workspace in Microsoft Sentinel.
-For more information about shared access signatures, see [Azure Storage shared access signature token](../storage/common/storage-sas-overview.md#sas-token).
+For more information about shared access signatures, see [Azure Storage shared access signature token](../storage/common/storage-sas-overview.md#sas-token).
### Step 1: Upload a watchlist file to Azure Storage
Upload files and directories to Blob storage by using the AzCopy v10 command-lin
https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name> ```
-2. Next, run the following command to upload the file.
+1. Next, run the following command to upload the file.
```azcopy azcopy copy '<local-file-path>' 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name>/<blob-name>'
If you don't use AzCopy, upload your file by using the Azure portal. Go to your
### Step 2: Create shared access signature URL
-Create a shared access signature URL for Microsoft Sentinel to retrieve the watchlist data.
+Create a shared access signature URL for Microsoft Sentinel to retrieve the watchlist data.
-1. Follow the steps in [Create SAS tokens for blobs in the Azure portal](../cognitive-services/translator/document-translation/create-sas-tokens.md?tabs=blobs#create-sas-tokens-for-blobs-in-the-azure-portal).
+1. Follow the steps in [Create SAS tokens for blobs in the Azure portal](../cognitive-services/translator/document-translation/create-sas-tokens.md?tabs=blobs#create-sas-tokens-for-blobs-in-the-azure-portal).
1. Set the shared access signature token expiry time to be at minimum 6 hours. 1. Copy the value for **Blob SAS URL**. ### Step 3: Add the watchlist to a workspace 1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.+ 1. Under **Configuration**, select **Watchlist**.+ 1. Select **+ Add new**.
- :::image type="content" source="./media/watchlists-create/sentinel-watchlist-new.png" alt-text="Screenshot of the add watchlist on the watchlist page." lightbox="./media/watchlists-create/sentinel-watchlist-new.png":::
+ :::image type="content" source="./media/watchlists-create/sentinel-watchlist-new.png" alt-text="Screenshot of the add watchlist on the watchlist page." lightbox="./media/watchlists-create/sentinel-watchlist-new.png":::
1. On the **General** page, provide the name, description, and alias for the watchlist.
- :::image type="content" source="./media/watchlists-create/sentinel-watchlist-general.png" alt-text="Screenshot of the watchlist general tab with name, description, and watchlist alias fields.":::
+ :::image type="content" source="./media/watchlists-create/sentinel-watchlist-general.png" alt-text="Screenshot of the watchlist general tab with name, description, and watchlist alias fields.":::
1. Select **Next: Source**.
-1. Use the information in the following table to upload your watchlist data.
- |Field |Description |
- |||
- |Source type | Azure Storage (preview) |
- |Select a type for the dataset | CSV file with a header (.csv) |
- |Number of lines before row with headings | Enter the number of lines before the header row that's in your data file. |
- |Blob SAS URL (Preview) | Paste in the shared access URL you created. |
- |SearchKey | Enter the name of a column in your watchlist that you expect to use as a join with other data or a frequent object of searches. For example, if your server watchlist contains country names and their respective two-letter country codes, and you expect to use the country codes often for search or joins, use the **Code** column as the SearchKey. |
+1. Use the information in the following table to upload your watchlist data.
- After you enter all the information, your page will look similar to following image.
+ |Field |Description |
+ |||
+ |Source type | Azure Storage (preview) |
+ |Select a type for the dataset | CSV file with a header (.csv) |
+ |Number of lines before row with headings | Enter the number of lines before the header row that's in your data file. |
+ |Blob SAS URL (Preview) | Paste in the shared access URL you created. |
+ |SearchKey | Enter the name of a column in your watchlist that you expect to use as a join with other data or a frequent object of searches. For example, if your server watchlist contains country names and their respective two-letter country codes, and you expect to use the country codes often for search or joins, use the **Code** column as the SearchKey. |
+
+ After you enter all the information, your page will look similar to the following image.
- :::image type="content" source="./media/watchlists-create/watchlist-source-azure-storage.png" alt-text="Screenshot of the watchlist source page with sample values entered." lightbox="./media/watchlists-create/watchlist-source-azure-storage.png":::
+ :::image type="content" source="./media/watchlists-create/watchlist-source-azure-storage.png" alt-text="Screenshot of the watchlist source page with sample values entered." lightbox="./media/watchlists-create/watchlist-source-azure-storage.png":::
1. Select **Next: Review and Create**.+ 1. Review the information, verify that it's correct, wait for the **Validation passed** message.+ 1. Select **Create**. It might take a while for a large watchlist to be created and the new data to be available in queries.
It might take a while for a large watchlist to be created and the new data to be
View the status by selecting the watchlist in your workspace. 1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.+ 1. Under **Configuration**, select **Watchlist**.+ 1. On the **My Watchlists** tab, select the watchlist.+ 1. On the details page, review the **Status (Preview)**.
- :::image type="content" source="./media/watchlists-create/view-status-uploading.png" alt-text="Screenshot that shows the upload status on the watchlist." lightbox="./media/watchlists-create/view-status-uploading.png":::
+ :::image type="content" source="./media/watchlists-create/view-status-uploading.png" alt-text="Screenshot that shows the upload status on the watchlist." lightbox="./media/watchlists-create/view-status-uploading.png":::
1. When the status is **Succeeded**, select **View in Log Analytics** to use the watchlist in a query. It might take several minutes for the watchlist to show in Log Analytics.
Each built-in watchlist template has its own set of data listed in the CSV file
To download one of the watchlist templates, 1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.+ 1. Under **Configuration**, select **Watchlist**.+ 1. Select the tab **Templates (Preview)**.+ 1. Select a template from the list to view details of the template in the right pane.+ 1. Select the ellipses **...** at the end of the row.+ 1. Select **Download Schema**.
- :::image type="content" source="./media/watchlists-create/create-watchlist-download-schema.png" alt-text="Screenshot of templates tab with download schema selected.":::
+ :::image type="content" source="./media/watchlists-create/create-watchlist-download-schema.png" alt-text="Screenshot of templates tab with download schema selected.":::
1. Populate your local version of the file and save it locally as a CSV file.+ 1. Follow the steps to [upload watchlist created from a template (Preview)](#upload-watchlist-created-from-a-template-preview). ## Deleted and recreated watchlists in Log Analytics view
If you delete and recreate a watchlist, you might see both the deleted and recre
## Next steps To learn more about Microsoft Sentinel, see the following articles:+ - Learn how to [get visibility into your data and potential threats](get-visibility.md) - Get started [detecting threats with Microsoft Sentinel](./detect-threats-built-in.md) - [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new-archive.md
The first tab on an incident details page is now the **Timeline**, which shows a
For example: For more information, see [Tutorial: Investigate incidents with Azure Sentinel](investigate-cases.md).
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
By default, incident searches run across the **Incident ID**, **Title**, **Tags*
For example: For more information, see [Search for incidents](investigate-cases.md#search-for-incidents).
service-bus-messaging Service Bus Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-transactions.md
using (var ts = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
} ```
+To learn more about the `EnableCrossEntityTransactions` property, see the following reference [ServiceBusClientBuilder.enableCrossEntityTransactions Method](/java/api/com.azure.messaging.servicebus.servicebusclientbuilder.enablecrossentitytransactions).
+ ## Timeout A transaction times out after 2 minutes. The transaction timer starts when the first operation in the transaction starts.
spring-cloud Connect Managed Identity To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/connect-managed-identity-to-azure-sql.md
Rebuild the app and deploy it to the Azure Spring Cloud app provisioned in the s
* [How to access Storage blob with managed identity in Azure Spring Cloud](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/managed-identity-storage-blob) * [How to enable system-assigned managed identity for applications in Azure Spring Cloud](./how-to-enable-system-assigned-managed-identity.md)
-* [Learn more about managed identities for Azure resources](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/managed-identities-azure-resources/overview.md)
+* [Learn more about managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
* [Authenticate Azure Spring Cloud with Key Vault in GitHub Actions](./github-actions-key-vault.md)
spring-cloud How To Deploy In Azure Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-deploy-in-azure-virtual-network.md
The route tables to which your custom vnet is associated must meet the following
## Next steps
-* [Deploy Application to Azure Spring Cloud in your VNet](https://github.com/microsoft/vnet-in-azure-spring-cloud/blob/master/02-deploy-application-to-azure-spring-cloud-in-your-vnet.md)
-* [Troubleshooting Azure Spring Cloud in VNET](https://github.com/microsoft/vnet-in-azure-spring-cloud/blob/master/05-troubleshooting-azure-spring-cloud-in-vnet.md)
-* [Customer Responsibilities for Running Azure Spring Cloud in VNET](https://github.com/microsoft/vnet-in-azure-spring-cloud/blob/master/06-customer-responsibilities-for-running-azure-spring-cloud-in-vnet.md)
+* [Troubleshooting Azure Spring Cloud in VNET](troubleshooting-vnet.md)
+* [Customer Responsibilities for Running Azure Spring Cloud in VNET](vnet-customer-responsibilities.md)
spring-cloud How To Elastic Apm Java Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-elastic-apm-java-agent-monitor.md
This article explains how to use Elastic APM Agent to monitor Spring Boot applic
With the Elastic Observability Solution, you can achieve unified observability to: * Monitor apps using the Elastic APM Java Agent and using persistent storage with Azure Spring Cloud.
-* Use diagnostic settings to ship Azure Spring Cloud logs to Elastic. For more information, see [Analyze logs with Elastic (ELK) using diagnostics settings](https://github.com/hemantmalik/azure-docs/blob/master/articles/spring-cloud/how-to-elastic-diagnostic-settings.md).
+* Use diagnostic settings to ship Azure Spring Cloud logs to Elastic. For more information, see [Analyze logs with Elastic (ELK) using diagnostics settings](how-to-elastic-diagnostic-settings.md).
The following video introduces unified observability for Spring Boot applications using Elastic.
spring-cloud How To Enable System Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enable-system-assigned-managed-identity.md
az spring-cloud app identity remove \
## Next steps
-* [Access Azure Key Vault with managed identities in Spring boot starter](https://github.com/Azure/azure-sdk-for-jav#use-msi--managed-identities)
-* [Learn more about managed identities for Azure resources](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/managed-identities-azure-resources/overview.md)
+* [Learn more about managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
* [How to use managed identities with Java SDK](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples)
spring-cloud How To Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-manage-user-assigned-managed-identities.md
For user-assigned managed identity limitations, see [Quotas and service plans fo
## Next steps
-* [Access Azure Key Vault with managed identities in Spring boot starter](https://github.com/Azure/azure-sdk-for-jav#use-msi--managed-identities)
-* [Learn more about managed identities for Azure resources](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/managed-identities-azure-resources/overview.md)
+* [Learn more about managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
* [How to use managed identities with Java SDK](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples)
spring-cloud How To Outbound Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-outbound-public-ip.md
az spring-cloud show --resource-group <group_name> --name <service_name> --query
## Next steps
-* [Learn more about managed identities for Azure resources](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/managed-identities-azure-resources/overview.md)
+* [Learn more about managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
* [Learn more about key vault in Azure Spring Cloud](./tutorial-managed-identities-key-vault.md)
spring-cloud Quickstart Provision Service Instance Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-provision-service-instance-enterprise.md
Use the following steps to provision an Azure Spring Cloud service instance:
- Give a **Sampling Rate** within the range of 0-100, or use the default value 10. > [!NOTE]
- > You'll pay for the usage of Application Insights when integrated with Azure Spring Cloud. For more information about Application Insights pricing, see [Manage usage and costs for Application Insights](../azure-monitor/app/pricing.md).
+ > You'll pay for the usage of Application Insights when integrated with Azure Spring Cloud. For more information about Application Insights pricing, see [Application Insights billing](../azure-monitor/logs/cost-logs.md#application-insights-billing).
:::image type="content" source="media/enterprise/getting-started-enterprise/application-insights.png" alt-text="Azure portal screenshot of Azure Spring Cloud creation page with Application Insights section showing." lightbox="media/enterprise/getting-started-enterprise/application-insights.png":::
spring-cloud Tutorial Managed Identities Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/tutorial-managed-identities-functions.md
This sample will invoke the Http triggered function by first requesting an acces
## Next steps * [How to enable system-assigned managed identity for applications in Azure Spring Cloud](./how-to-enable-system-assigned-managed-identity.md)
-* [Learn more about managed identities for Azure resources](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/managed-identities-azure-resources/overview.md)
+* [Learn more about managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
* [Configure client apps to access your App Service](../app-service/configure-authentication-provider-aad.md#configure-client-apps-to-access-your-app-service)
spring-cloud Tutorial Managed Identities Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/tutorial-managed-identities-key-vault.md
This article shows you how to create a managed identity for an Azure Spring Cloud app and use it to access Azure Key Vault.
-Azure Key Vault can be used to securely store and tightly control access to tokens, passwords, certificates, API keys, and other secrets for your app. You can create a managed identity in Azure Active Directory (AAD), and authenticate to any service that supports AAD authentication, including Key Vault, without having to display credentials in your code.
+Azure Key Vault can be used to securely store and tightly control access to tokens, passwords, certificates, API keys, and other secrets for your app. You can create a managed identity in Azure Active Directory (Azure AD), and authenticate to any service that supports Azure AD authentication, including Key Vault, without having to display credentials in your code.
The following video describes how to manage secrets using Azure Key Vault.
This app will have access to get secrets from Azure Key Vault. Use the starter a
} ```
- If you open pom.xml, you will see the dependency of `azure-keyvault-secrets-spring-boot-starter`. Add this dependency to your project in pom.xml.
+ If you open the *pom.xml* file, you'll see the dependency `azure-keyvault-secrets-spring-boot-starter`. Add this dependency to your project in the *pom.xml* file.
```xml <dependency>
This app will have access to get secrets from Azure Key Vault. Use the starter a
curl https://myspringcloud-springapp.azuremicroservices.io/get ```
- You will see the message `Successfully got the value of secret connectionString from Key Vault https://<your-keyvault-name>.vault.azure.net/: jdbc:sqlserver://SERVER.database.windows.net:1433;database=DATABASE;`.
+ You'll see the message `Successfully got the value of secret connectionString from Key Vault https://<your-keyvault-name>.vault.azure.net/: jdbc:sqlserver://SERVER.database.windows.net:1433;database=DATABASE;`.
## Build sample Spring Boot app with Java SDK
-This sample can set and get secrets from Azure Key Vault. The [Azure Key Vault Secret client library for Java](/java/api/overview/azure/security-keyvault-secrets-readme) provides Azure Active Directory token authentication support across the Azure SDK. It provides a set of `TokenCredential` implementations that can be used to construct Azure SDK clients to support AAD token authentication.
+This sample can set and get secrets from Azure Key Vault. The [Azure Key Vault Secret client library for Java](/java/api/overview/azure/security-keyvault-secrets-readme) provides Azure Active Directory token authentication support across the Azure SDK. It provides a set of `TokenCredential` implementations that can be used to construct Azure SDK clients to support Azure AD token authentication.
The Azure Key Vault Secret client library allows you to securely store and control access to tokens, passwords, API keys, and other secrets. The library offers operations to create, retrieve, update, delete, purge, back up, restore, and list secrets and their versions.
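As a rough illustration of those operations, the following sketch sets a secret and reads it back with `SecretClient`, authenticating through the app's managed identity. The vault URL and secret name are placeholders, not values from the sample project.

```java
import com.azure.identity.ManagedIdentityCredentialBuilder;
import com.azure.security.keyvault.secrets.SecretClient;
import com.azure.security.keyvault.secrets.SecretClientBuilder;
import com.azure.security.keyvault.secrets.models.KeyVaultSecret;

public class SecretDemo {
    public static void main(String[] args) {
        // The credential picks up the app's managed identity from the environment,
        // so no connection string or password appears in code.
        SecretClient client = new SecretClientBuilder()
                .vaultUrl("https://<your-keyvault-name>.vault.azure.net/")
                .credential(new ManagedIdentityCredentialBuilder().build())
                .buildClient();

        // Create (or update) a secret, then read it back.
        client.setSecret("test", "success");
        KeyVaultSecret secret = client.getSecret("test");
        System.out.printf("Got secret %s = %s%n", secret.getName(), secret.getValue());
    }
}
```

When running outside Azure, you'd typically swap in `DefaultAzureCredential` so the same code also works with developer credentials.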
The Azure Key Vault Secret client library allows you to securely store and contr
Get the example from [MainController.java](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/blob/master/managed-identity-keyvault/src/main/java/com/microsoft/azure/MainController.java#L28) of the cloned sample project.
- Also include `azure-identity` and `azure-security-keyvault-secrets` as dependency in your pom.xml. Get the example from [pom.xml](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/blob/master/managed-identity-keyvault/pom.xml#L21) of the cloned sample project.
+ Also include `azure-identity` and `azure-security-keyvault-secrets` as dependencies in your *pom.xml* file. Get the example from [pom.xml](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/blob/master/managed-identity-keyvault/pom.xml#L21) of the cloned sample project.
4. Package your sample app.
The Azure Key Vault Secret client library allows you to securely store and contr
curl https://myspringcloud-springapp.azuremicroservices.io/secrets/connectionString ```
- You will see the message `Successfully got the value of secret connectionString from Key Vault https://<your-keyvault-name>.vault.azure.net/: jdbc:sqlserver://SERVER.database.windows.net:1433;database=DATABASE;`.
+ You'll see the message `Successfully got the value of secret connectionString from Key Vault https://<your-keyvault-name>.vault.azure.net/: jdbc:sqlserver://SERVER.database.windows.net:1433;database=DATABASE;`.
Now create a secret and then retrieve it using the Java SDK.
The Azure Key Vault Secret client library allows you to securely store and contr
curl https://myspringcloud-springapp.azuremicroservices.io/secrets/test ```
- You will see the message `Successfully got the value of secret test from Key Vault https://<your-keyvault-name>.vault.azure.net: success`.
+ You'll see the message `Successfully got the value of secret test from Key Vault https://<your-keyvault-name>.vault.azure.net: success`.
## Next steps * [How to access Storage blob with managed identity in Azure Spring Cloud](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/managed-identity-storage-blob) * [How to enable system-assigned managed identity for applications in Azure Spring Cloud](./how-to-enable-system-assigned-managed-identity.md)
-* [Learn more about managed identities for Azure resources](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/managed-identities-azure-resources/overview.md)
+* [Learn more about managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
* [Authenticate Azure Spring Cloud with Key Vault in GitHub Actions](./github-actions-key-vault.md)
storage Immutable Policy Configure Version Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-policy-configure-version-scope.md
If version-level immutability support is enabled for a container and the contain
#### Migrate an existing container to support version-level immutability
-To configure version-level immutability policies for an existing container, you must migrate the container to support version-level immutable storage. Container migration may take some time and cannot be reversed. You can migrate only one container at a time per storage account.
+To configure version-level immutability policies for an existing container, you must migrate the container to support version-level immutable storage. Container migration may take some time and cannot be reversed. You can migrate up to ten containers at a time per storage account.
To migrate an existing container to support version-level immutability policies, the container must have a container-level time-based retention policy configured. The migration fails unless the container has an existing policy. The retention interval for the container-level policy is maintained as the retention interval for the default version-level policy on the container.
storage Secure File Transfer Protocol Host Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-host-keys.md
Blob storage now supports the SSH File Transfer Protocol (SFTP). This support pr
When you connect to Blob Storage by using an SFTP client, you might be prompted to trust a host key. During the public preview, you can verify the host key by finding that key in the list presented in this article. > [!IMPORTANT]
-> SFTP support is currently in PREVIEW and is available on general-purpose v2 and premium block blob accounts.
+> SFTP support is currently in PREVIEW and is available on general-purpose v2 and premium block blob accounts. Complete [this form](https://forms.office.com/r/gZguN0j65Y) BEFORE using the feature in preview. Registration via 'preview features' is NOT required and confirmation email will NOT be sent after filling out the form. You can IMMEDIATELY access the feature.
+>
+> After testing your end-to-end scenarios with SFTP, please share your experience via [this form](https://forms.office.com/r/MgjezFV1NR).
> > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-> To enroll in the preview, see [this form](https://forms.office.com/r/gZguN0j65Y).
## Valid host keys
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
This article describes limitations and known issues of SFTP support for Azure Blob Storage. > [!IMPORTANT]
-> SFTP support is currently in PREVIEW and is available on general-purpose v2 and premium block blob accounts.
+> SFTP support is currently in PREVIEW and is available on general-purpose v2 and premium block blob accounts. Complete [this form](https://forms.office.com/r/gZguN0j65Y) BEFORE using the feature in preview. Registration via 'preview features' is NOT required and confirmation email will NOT be sent after filling out the form. You can IMMEDIATELY access the feature.
+>
+> After testing your end-to-end scenarios with SFTP, please share your experience via [this form](https://forms.office.com/r/MgjezFV1NR).
> > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-> To enroll in the preview, complete [this form](https://forms.office.com/r/gZguN0j65Y) AND request to join via 'Preview features' in Azure portal.
## Known unsupported clients
For performance issues and considerations, see [SSH File Transfer Protocol (SFTP
- SSH commands, that are not SFTP, are not supported.
+- West Europe will temporarily still require registration of the SFTP preview feature.
+ ## Troubleshooting - To resolve the `Failed to update SFTP settings for account 'accountname'. Error: The value 'True' is not allowed for property isSftpEnabled.` error, ensure that the following pre-requisites are met at the storage account level:
For performance issues and considerations, see [SSH File Transfer Protocol (SFTP
- The account needs to have hierarchical namespace enabled on it.
- - Customer's subscription needs to be signed up for the preview. Request to join via 'Preview features' in the Azure portal. Requests are automatically approved.
+ - Accounts in West Europe will temporarily require the customer's subscription to be signed up for the preview. Request to join via 'Preview features' in the Azure portal. Requests are automatically approved.
- To resolve the `Home Directory not accessible error.` error, check that:
storage Secure File Transfer Protocol Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-performance.md
# SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage (preview)
-Blob storage now supports the SSH File Transfer Protocol (SFTP). This article contains recommendations that will help you to optimize the performance of your storage requests. To learn more about SFTP support for Azure Blob Storage, see [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md).
+Blob storage now supports the SSH File Transfer Protocol (SFTP). This article contains recommendations that will help you to optimize the performance of your storage requests. To learn more about SFTP support for Azure Blob Storage, see [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md).
+
+> [!IMPORTANT]
+> SFTP support is currently in PREVIEW and is available on general-purpose v2 and premium block blob accounts. Complete [this form](https://forms.office.com/r/gZguN0j65Y) BEFORE using the feature in preview. Registration via 'preview features' is NOT required and confirmation email will NOT be sent after filling out the form. You can IMMEDIATELY access the feature.
+>
+> After testing your end-to-end scenarios with SFTP, please share your experience via [this form](https://forms.office.com/r/MgjezFV1NR).
+>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Use concurrent connections to increase throughput
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
You can securely connect to the Blob Storage endpoint of an Azure Storage accoun
To learn more about SFTP support for Azure Blob Storage, see [SSH File Transfer Protocol (SFTP) in Azure Blob Storage](secure-file-transfer-protocol-support.md). > [!IMPORTANT]
-> SFTP support is currently in PREVIEW and is available on general-purpose v2 and premium block blob accounts.
+> SFTP support is currently in PREVIEW and is available on general-purpose v2 and premium block blob accounts. Complete [this form](https://forms.office.com/r/gZguN0j65Y) BEFORE using the feature in preview. Registration via 'preview features' is NOT required and confirmation email will NOT be sent after filling out the form. You can IMMEDIATELY access the feature.
>
+> After testing your end-to-end scenarios with SFTP, please share your experience via [this form](https://forms.office.com/r/MgjezFV1NR).
+>
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-> To enroll in the preview, see [this form](https://forms.office.com/r/gZguN0j65Y).
## Prerequisites
To learn more about SFTP support for Azure Blob Storage, see [SSH File Transfer
- If you're connecting from an on-premises network, make sure that your client allows outgoing communication through port 22 used by SFTP.
-## Register the feature
-
-Before you can enable SFTP support, you must register the SFTP feature with your subscription.
-
-### [Portal](#tab/azure-portal)
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-2. Open the configuration page of your subscription.
-
-3. Under **Settings**, select **Preview features**.
-
- > [!div class="mx-imgBorder"]
- > ![Preview setting](./media/secure-file-transfer-protocol-support-how-to/preview-features-setting.png)
-
-4. In the **Preview features** page, select the **AllowSFTP** feature, and then select **Register**.
-
-### [PowerShell](#tab/powershell)
-
-1. Open a Windows PowerShell command window.
-
-2. Install **Az.Storage** preview module.
-
- ```powershell
- Install-Module -Name Az.Storage -AllowPrerelease
- ```
-
- For more information about how to install PowerShell modules, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps)
-
-3. Sign in to your Azure subscription with the `Connect-AzAccount` command and follow the on-screen directions.
-
- ```powershell
- Connect-AzAccount
- ```
-
-4. If your identity is associated with more than one subscription, then set your active subscription.
-
- ```powershell
- $context = Get-AzSubscription -SubscriptionId <subscription-id>
- Set-AzContext $context
- ```
-
- Replace the `<subscription-id>` placeholder value with the ID of your subscription.
-
-5. Register the `AllowSFTP` feature by using the [Register-AzProviderFeature](/powershell/module/az.resources/register-azproviderfeature) command.
-
- ```powershell
- Register-AzProviderFeature -ProviderNamespace Microsoft.Storage -FeatureName AllowSFTP
- ```
-
- > [!NOTE]
- > The registration process might not complete immediately. Make sure to verify that the feature is registered before using it.
-
-### [Azure CLI](#tab/azure-cli)
-
-1. Open the [Azure Cloud Shell](../../cloud-shell/overview.md), or if you've [installed](/cli/azure/install-azure-cli) the Azure CLI locally, open a command console application such as Windows PowerShell.
-
-2. Install the `storage-preview` extension.
-
- ```azurecli
- az extension add -n storage-preview
- ```
-
-2. If you're using Azure CLI locally, run the login command.
-
- ```azurecli
- az login
- ```
-
- If the CLI can open your default browser, it will do so and load an Azure sign-in page.
-
- Otherwise, open a browser page at [https://aka.ms/devicelogin](https://aka.ms/devicelogin) and enter the authorization code displayed in your terminal. Then, sign in with your account credentials in the browser.
-
-1. If your identity is associated with more than one subscription, then set your active subscription to subscription of the storage account.
-
- ```azurecli
- az account set --subscription <subscription-id>
- ```
-
- Replace the `<subscription-id>` placeholder value with the ID of your subscription.
-
-4. Register the `AllowSFTP` feature by using the [az feature register](/cli/azure/feature#az-feature-register) command.
-
- ```azurecli
- az feature register --namespace Microsoft.Storage --name AllowSFTP
- ```
-
- > [!NOTE]
- > The registration process might not complete immediately. Make sure to verify that the feature is registered before using it.
---
-### Verify feature registration
-
-Verify that the feature is registered before continuing with the other steps in this article.
-
-#### [Portal](#tab/azure-portal)
-
-1. Open the **Preview features** page of your subscription.
-
-2. Locate the **AllowSFTP** feature and make sure that **Registered** appears in the **State** column.
-
-#### [PowerShell](#tab/powershell)
-
-To verify that the registration is complete, use the [Get-AzProviderFeature](/powershell/module/az.resources/get-azproviderfeature) command.
-
-```powershell
-Get-AzProviderFeature -ProviderNamespace Microsoft.Storage -FeatureName AllowSFTP
-```
-
-#### [Azure CLI](#tab/azure-cli)
-
-To verify that the registration is complete, use the [az feature](/cli/azure/feature#az-feature-show) command.
-
-```azurecli
-az feature show --namespace Microsoft.Storage --name AllowSFTP
-```
--- ## Enable SFTP support This section shows you how to enable SFTP support for an existing storage account. To view an Azure Resource Manager template that enables SFTP support as part of creating the account, see [Create an Azure Storage Account and Blob Container accessible using SFTP protocol on Azure](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.storage/storage-sftp).
To learn more about the SFTP permissions model, see [SFTP Permissions model](sec
> [!div class="mx-imgBorder"] > ![Container permissions tab](./media/secure-file-transfer-protocol-support-how-to/container-perm-tab.png)
-6. In the **Home directory** edit box, type the name of the container or the directory path (including the container name) that will be the default location associated with this this local user.
+6. In the **Home directory** edit box, type the name of the container or the directory path (including the container name) that will be the default location associated with this local user.
To learn more about the home directory, see [Home directory](secure-file-transfer-protocol-support.md#home-directory).
To learn more about the SFTP permissions model, see [SFTP Permissions model](sec
> You can't retrieve this password later, so make sure to copy the password, and then store it in a place where you can find it. If you chose to generate a new key pair, then you'll be prompted to download the private key of that key pair after the local user has been added.
+
+ > [!NOTE]
+ > Local users have a `sharedKey` property that is used for SMB authentication only.
### [PowerShell](#tab/powershell)
-1. Decide which containers you want to make available to the local user and the types of operations that you want to enable this local user to perform. Create a permission scope object by using the the **New-AzStorageLocalUserPermissionScope** command, and setting the `-Permission` parameter of that command to one or more letters that correspond to access permission levels. Possible values are Read(r), Write (w), Delete (d), List (l), and Create (c).
+1. Decide which containers you want to make available to the local user and the types of operations that you want to enable this local user to perform. Create a permission scope object by using the **New-AzStorageLocalUserPermissionScope** command and setting the `-Permission` parameter of that command to one or more letters that correspond to access permission levels. Possible values are Read (r), Write (w), Delete (d), List (l), and Create (c).
- The following example sets creates a permission scope object that gives read and write permission to the `mycontainer` container.
+ The following example creates a permission scope object that gives read and write permission to the `mycontainer` container.
```powershell $permissionScope = New-AzStorageLocalUserPermissionScope -Permission rw -Service blob -ResourceName mycontainer
To learn more about the SFTP permissions model, see [SFTP Permissions model](sec
$localuser.SshAuthorizedKeys | ft $localuser.PermissionScopes | ft ```
+ > [!NOTE]
+ > Local users also have a `sharedKey` property that is used for SMB authentication only.
5. If you want to use a password to authenticate the user, you can create a password by using the **New-AzStorageLocalUserSshPassword** command. Set the `-UserName` parameter to the user name.
To learn more about the SFTP permissions model, see [SFTP Permissions model](sec
```azurecli az storage account local-user create --account-name contosoaccount -g contoso-resource-group -n contosouser --home-directory contosocontainer --permission-scope permissions=rw service=blob resource-name=contosocontainer --ssh-authorized-key key="ssh-rsa ssh-rsa a2V5..." --has-ssh-key true --has-ssh-password true ```
+ > [!NOTE]
+ > Local users also have a `sharedKey` property that is used for SMB authentication only.
3. If you want to use a password to authenticate the user, you can create a password by using the [az storage account local-user regenerate-password](/cli/azure/storage/account/local-user#az-storage-account-local-user-regenerate-password) command. Set the `-n` parameter to the local user name. The following example generates a password for the user.
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
Blob storage now supports the SSH File Transfer Protocol (SFTP). This support provides the ability to securely connect to Blob Storage accounts via an SFTP endpoint, allowing you to leverage SFTP for file access, file transfer, as well as file management. > [!IMPORTANT]
-> SFTP support currently is in PREVIEW and is available on general-purpose v2 and premium block blob accounts.
+> SFTP support is currently in PREVIEW and is available on general-purpose v2 and premium block blob accounts. Complete [this form](https://forms.office.com/r/gZguN0j65Y) BEFORE using the feature in preview. Registration via 'preview features' is NOT required and confirmation email will NOT be sent after filling out the form. You can IMMEDIATELY access the feature.
>
+> After testing your end-to-end scenarios with SFTP, please share your experience via [this form](https://forms.office.com/r/MgjezFV1NR).
+>
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-> To enroll in the preview, complete [this form](https://forms.office.com/r/gZguN0j65Y) AND request to join via 'Preview features' in Azure portal.
Azure allows secure data transfer to Blob Storage accounts using Azure Blob service REST API, Azure SDKs, and tools such as AzCopy. However, legacy workloads often use traditional file transfer protocols such as SFTP. You could update custom applications to use the REST API and Azure SDKs, but only by making significant code changes.
storage Authorize Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/authorize-data-access.md
Each time you access data in your storage account, your client application makes
The following table describes the options that Azure Storage offers for authorizing access to data:
-| Azure artifact | Shared Key (storage account key) | Shared access signature (SAS) | Azure Active Directory (Azure AD) | On-premises Active Directory Domain Services | Anonymous public read access |
-|--|--|--|--|--|--|
-| Azure Blobs | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | [Supported](storage-sas-overview.md) | [Supported](../blobs/authorize-access-azure-active-directory.md) | Not supported | [Supported](../blobs/anonymous-read-access-configure.md) |
-| Azure Files (SMB) | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | Not supported | [Supported, only with AAD Domain Services](../files/storage-files-active-directory-overview.md) | [Supported, credentials must be synced to Azure AD](../files/storage-files-active-directory-overview.md) | Not supported |
-| Azure Files (REST) | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | [Supported](storage-sas-overview.md) | Not supported | Not supported | Not supported |
-| Azure Queues | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | [Supported](storage-sas-overview.md) | [Supported](../queues/authorize-access-azure-active-directory.md) | Not Supported | Not supported |
-| Azure Tables | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | [Supported](storage-sas-overview.md) | [Supported](../tables/authorize-access-azure-active-directory.md) | Not supported | Not supported |
+| Azure artifact | Shared Key (storage account key) | Shared access signature (SAS) | Azure Active Directory (Azure AD) | On-premises Active Directory Domain Services | Anonymous public read access | Storage Local Users |
+|--|--|--|--|--|--|--|
+| Azure Blobs | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | [Supported](storage-sas-overview.md) | [Supported](../blobs/authorize-access-azure-active-directory.md) | Not supported | [Supported](../blobs/anonymous-read-access-configure.md) | [Supported, only for SFTP](../blobs/secure-file-transfer-protocol-support-how-to.md) |
+| Azure Files (SMB) | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | Not supported | [Supported, only with AAD Domain Services](../files/storage-files-active-directory-overview.md) | [Supported, credentials must be synced to Azure AD](../files/storage-files-active-directory-overview.md) | Not supported | Supported |
+| Azure Files (REST) | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | [Supported](storage-sas-overview.md) | Not supported | Not supported | Not supported | Not supported |
+| Azure Queues | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | [Supported](storage-sas-overview.md) | [Supported](../queues/authorize-access-azure-active-directory.md) | Not Supported | Not supported | Not supported |
+| Azure Tables | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | [Supported](storage-sas-overview.md) | [Supported](../tables/authorize-access-azure-active-directory.md) | Not supported | Not supported | Not supported |
Each authorization option is briefly described below:
Each authorization option is briefly described below:
- **Anonymous public read access** for containers and blobs. When anonymous access is configured, then clients can read blob data without authorization. For more information, see [Manage anonymous read access to containers and blobs](../blobs/anonymous-read-access-configure.md). You can disallow anonymous public read access for a storage account. When anonymous public read access is disallowed, then users cannot configure containers to enable anonymous access, and all requests must be authorized. For more information, see [Prevent anonymous public read access to containers and blobs](../blobs/anonymous-read-access-prevent.md).
+
+- **Storage Local Users** can be used to access blobs with SFTP or files with SMB. Storage Local Users support container level permissions for authorization. See [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](../blobs/secure-file-transfer-protocol-support-how-to.md) for more information on how Storage Local Users can be used with SFTP.
## Next steps - Authorize access with Azure Active Directory to either [blob](../blobs/authorize-access-azure-active-directory.md), [queue](../queues/authorize-access-azure-active-directory.md), or [table](../tables/authorize-access-azure-active-directory.md) resources. - [Authorize with Shared Key](/rest/api/storageservices/authorize-with-shared-key/) - [Grant limited access to Azure Storage resources using shared access signatures (SAS)](storage-sas-overview.md)
-
+
storage Redundancy Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/redundancy-migration.md
The following table provides an overview of how to switch from each type of repl
<sup>2</sup> Migrating from LRS to GRS is not supported if the storage account contains blobs in the archive tier.<br /> <sup>3</sup> Live migration is supported for standard general-purpose v2 and premium file share storage accounts. Live migration is not supported for premium block blob or page blob storage accounts.<br /> <sup>4</sup> After an account failover to the secondary region, it's possible to initiate a fail back from the new primary back to the new secondary with PowerShell or Azure CLI (version 2.30.0 or later). For more information, see [Use caution when failing back to the original primary](storage-disaster-recovery-guidance.md#use-caution-when-failing-back-to-the-original-primary). <br />
-<sup>5</sup> Migrating from LRS to ZRS is not supported if the storage account contains Azure Files NFSv4.1 shares. <br />
+<sup>5</sup> Migrating from LRS to ZRS is not supported if the NFSv3 protocol support is enabled for Azure Blob Storage or if the storage account contains Azure Files NFSv4.1 shares. <br />
> [!CAUTION] > If you performed an [account failover](storage-disaster-recovery-guidance.md) for your (RA-)GRS or (RA-)GZRS account, the account is locally redundant (LRS) in the new primary region after the failover. Live migration to ZRS or GZRS for an LRS account resulting from a failover is not supported. This is true even in the case of so-called failback operations. For example, if you perform an account failover from RA-GZRS to the LRS in the secondary region, and then configure it again to RA-GRS and perform another account failover to the original primary region, you can't contact support for the original live migration to RA-GZRS in the primary region. Instead, you'll need to perform a manual migration to ZRS or GZRS.
virtual-desktop Azure Monitor Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-monitor-costs.md
To learn about managing rights and permissions to the workbook, see [Access cont
Here are some suggestions to optimize your Log Analytics settings to manage data ingestion: - Use a designated Log Analytics workspace for your Azure Virtual Desktop resources to ensure that Log Analytics only collects performance counters and events for the virtual machines in your Azure Virtual Desktop deployment.-- Adjust your Log Analytics storage settings to manage costs. You can reduce the retention period, evaluate whether a fixed storage pricing tier would be more cost-effective, or set boundaries on how much data you can ingest to limit impact of an unhealthy deployment. To learn more, see [Manage usage and costs for Azure Monitor Logs](../azure-monitor/logs/manage-cost-storage.md).
+- Adjust your Log Analytics storage settings to manage costs. You can reduce the retention period, evaluate whether a fixed storage pricing tier would be more cost-effective, or set boundaries on how much data you can ingest to limit impact of an unhealthy deployment. To learn more, see [Azure Monitor Logs pricing details](../azure-monitor/logs/cost-logs.md).
### Remove excess data
Our default configuration is the only set of data we recommend for Azure Monitor
### Measure and manage your performance counter data
-Your true monitoring costs will depend on your environment size, usage, and health. To understand how to measure data ingestion in your Log Analytics workspace, see [Understanding ingested log data volume](../azure-monitor/logs/manage-cost-storage.md#understanding-ingested-data-volume).
+Your true monitoring costs will depend on your environment size, usage, and health. To understand how to measure data ingestion in your Log Analytics workspace, see [Analyze usage in Log Analytics workspace](../azure-monitor/logs/analyze-usage.md).
The performance counters the session hosts use will probably be your largest source of ingested data for Azure Monitor for Azure Virtual Desktop. The following custom query template for a Log Analytics workspace can track frequency and megabytes ingested per performance counter over the last day:
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
The following table summarizes identity scenarios that Azure Virtual Desktop cur
> [!IMPORTANT] > The user account must exist in the Azure AD tenant you use for Azure Virtual Desktop. Azure Virtual Desktop doesn't support [B2B](../active-directory/external-identities/what-is-b2b.md), [B2C](../active-directory-b2c/overview.md), or personal Microsoft accounts. >
-> The [UserPrincipalName (UPN)](../active-directory/hybrid/plan-connect-userprincipalname.md) you use to subscribe to Azure Virtual Desktop must exist in the Active Directory domain you're joining the session host to.
+> When using hybrid identities, either the UserPrincipalName (UPN) or the Security Identifier (SID) must match across Active Directory Domain Services and Azure Active Directory. For more information, see [Supported identities and authentication methods](authentication.md#hybrid-identity).
### Deployment parameters
virtual-machine-scale-sets Virtual Machine Scale Sets Orchestration Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md
The following table compares the Flexible orchestration mode, Uniform orchestrat
| RBAC Permissions Required | Compute VMSS Write, Compute VM Write, Network | Compute VMSS Write | N/A | | Accelerated networking | Yes | Yes | Yes | | Spot instances and pricing  | Yes, you can have both Spot and Regular priority instances | Yes, instances must either be all Spot or all Regular | No, Regular priority instances only |
-| Mix operating systems | Yes, Linux and Windows can reside in the same Flexible scale set | No, instances are the same operating system | Yes, Linux and Windows can reside in the same Flexible scale set |
+| Mix operating systems | Yes, Linux and Windows can reside in the same Flexible scale set | No, instances are the same operating system | Yes, Linux and Windows can reside in the same availability set |
| Disk Types | Managed disks only, all storage types | Managed and unmanaged disks, all storage types | Managed and unmanaged disks, Ultradisk not supported | | Write Accelerator  | No | Yes | Yes | | Proximity Placement Groups  | Yes, read [Proximity Placement Groups documentation](../virtual-machine-scale-sets/proximity-placement-groups.md) | Yes, read [Proximity Placement Groups documentation](../virtual-machine-scale-sets/proximity-placement-groups.md) | Yes |
virtual-machines Ddv5 Ddsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ddv5-ddsv5-series.md
Ddv5-series virtual machines support Standard SSD and Standard HDD disk types. T
[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br> <br>
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS/MBps<sup>*</sup> | Max NICs|Max network bandwidth (Mbps) |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max NICs|Max network bandwidth (Mbps) |
||||||||| | Standard_D2d_v5<sup>1,2</sup> | 2 | 8 | 75 | 4 | 9000/125 | 2 | 12500 | | Standard_D4d_v5 | 4 | 16 | 150 | 8 | 19000/250 | 2 | 12500 |
Ddsv5-series virtual machines support Standard SSD, Standard HDD, and Premium SS
<br>
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS/MBps<sup>*</sup> | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>3</sup> | Max NICs | Max network bandwidth (Mbps) |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>3</sup> | Max NICs | Max network bandwidth (Mbps) |
||||||||||| | Standard_D2ds_v5<sup>1,2</sup> | 2 | 8 | 75 | 4 | 9000/125 | 3750/85 | 10000/1200 | 2 | 12500 | | Standard_D4ds_v5 | 4 | 16 | 150 | 8 | 19000/250 | 6400/145 | 20000/1200 | 2 | 12500 |
virtual-machines Ebdsv5 Ebsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ebdsv5-ebsv5-series.md
The memory-optimized Ebsv5 and Ebdsv5 Azure virtual machine (VM) series deliver
The Ebsv5 and Ebdsv5 VMs offer up to 120000 IOPS and 4000 MBps of remote disk storage throughput. Both series also include up to 512 GiB of RAM. The Ebdsv5 series has local SSD storage up to 2400 GiB. Both series provide a 3X increase in remote storage performance of data-intensive workloads compared to prior VM generations. You can use these series to consolidate existing workloads on fewer VMs or smaller VM sizes while achieving potential cost savings. The Ebdsv5 series comes with a local disk and Ebsv5 is without a local disk. Standard SSDs and Standard HDD disk storage aren't supported in the Ebv5 series.
-The Ebdsv5 and Ebsv5 series run on the Intel® Xeon® Platinum 8272CL (Ice Lake) processors in a hyper-threaded configuration. The series are ideal for various memory-intensive enterprise applications. They feature:
+The Ebdsv5 and Ebsv5 series run on the Intel® Xeon® Platinum 8370C (Ice Lake) processors in a hyper-threaded configuration. The series are ideal for various memory-intensive enterprise applications. They feature:
- Up to 512 GiB of RAM - [Intel® Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html)
The Ebdsv5 and Ebsv5 series run on the Intel® Xeon® Platinum 8272CL (Ice Lake)
## Ebdsv5 series
-Ebdsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake) processors. The Ebdsv5 VM sizes feature up to 512 GiB of RAM, in addition to fast and large local SSD storage (up to 2400 GiB). These VMs are ideal for memory-intensive enterprise applications and applications that benefit from high remote storage performance, low latency, high-speed local storage. Remote Data disk storage is billed separately from VMs.
+Ebdsv5-series sizes run on the Intel® Xeon® Platinum 8370C (Ice Lake) processors. The Ebdsv5 VM sizes feature up to 512 GiB of RAM, in addition to fast and large local SSD storage (up to 2400 GiB). These VMs are ideal for memory-intensive enterprise applications and applications that benefit from high remote storage performance, low latency, high-speed local storage. Remote Data disk storage is billed separately from VMs.
- [Premium Storage](premium-storage-performance.md): Supported - [Premium Storage caching](premium-storage-performance.md): Supported
Ebdsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake) process
- [Ephemeral OS Disks](ephemeral-os-disks.md): Supported - Nested virtualization: Supported
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps | Max uncached storage throughput: IOPS / MBps | Max burst uncached disk throughput: IOPS/MBp | Max NICs | Network bandwidth |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached storage throughput: IOPS / MBps | Max burst uncached disk throughput: IOPS/MBp | Max NICs | Network bandwidth |
| | | | | | | | | | | | Standard_E2bds_v5 | 2 | 16 | 75 | 4 | 9000/125 | 5500/156 | 10000/1200 | 2 | 10000 | | Standard_E4bds_v5 | 4 | 32 | 150 | 8 | 19000/250 | 11000/350 | 20000/1200 | 2 | 10000 |
Ebsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake). These V
- [Ephemeral OS Disks](ephemeral-os-disks.md): Not supported - Nested virtualization: Supported
-| Size | vCPU | Memory: GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps | Max uncached storage throughput: IOPS / MBps | Max burst uncached disk throughput: IOPS/MBp | Max NICs | Network bandwidth |
+| Size | vCPU | Memory: GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached storage throughput: IOPS / MBps | Max burst uncached disk throughput: IOPS/MBp | Max NICs | Network bandwidth |
| | | | | | | | | | | Standard_E2bs_v5 | 2 | 16 | 4 | 9000/125 | 5500/156 | 10000/1200 | 2 | 10000 | | Standard_E4bs_v5 | 4 | 32 | 8 | 19000/250 | 11000/350 | 20000/1200 | 2 | 10000 |
virtual-machines Edv5 Edsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/edv5-edsv5-series.md
Edv5-series virtual machines support Standard SSD and Standard HDD disk types. T
[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br> [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br><br>
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS/MBps<sup>*</sup> | Max NICs|Max network bandwidth (Mbps) |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max NICs|Max network bandwidth (Mbps) |
||||||||| | Standard_E2d_v5<sup>1,2</sup> | 2 | 16 | 75 | 4 | 9000/125 | 2 | 12500 | | Standard_E4d_v5 | 4 | 32 | 150 | 8 | 19000/250 | 2 | 12500 |
Edsv5-series virtual machines support Standard SSD and Standard HDD disk types.
[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br> <br>
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS/MBps<sup>*</sup> | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>5</sup> | Max NICs | Max network bandwidth (Mbps) |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>5</sup> | Max NICs | Max network bandwidth (Mbps) |
||||||||||| | Standard_E2ds_v5<sup>1,2</sup> | 2 | 16 | 75 | 4 | 9000/125 | 3750/85 | 10000/1200 | 2 | 12500 | | Standard_E4ds_v5 | 4 | 32 | 150 | 8 | 19000/250 | 6400/145 | 20000/1200 | 2 | 12500 |
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/overview.md
This table shows some of the ways you can get a list of available locations.
Azure announced an industry leading single instance virtual machine Service Level Agreement of 99.9% provided you deploy the VM with premium storage for all disks. In order for your deployment to qualify for the standard 99.95% VM Service Level Agreement, you still need to deploy two or more VMs running your workload inside of an availability set. An availability set ensures that your VMs are distributed across multiple fault domains in the Azure data centers as well as deployed onto hosts with different maintenance windows. The full [Azure SLA](https://azure.microsoft.com/support/legal/sla/virtual-machines/) explains the guaranteed availability of Azure as a whole. ## VM Size
-The [size](../sizes.md) of the VM that you use is determined by the workload that you want to run. The size that you choose then determines factors such as processing power, memory, and storage capacity. Azure offers a wide variety of sizes to support many types of uses.
+The [size](../sizes.md) of the VM that you use is determined by the workload that you want to run. The size that you choose then determines factors such as processing power, memory, storage capacity, and network bandwidth. Azure offers a wide variety of sizes to support many types of uses.
Azure charges an [hourly price](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) based on the VM's size and operating system. For partial hours, Azure charges only for the minutes used. Storage is priced and charged separately.
Create your first VM!
- [Portal](quick-create-portal.md) - [Azure CLI](quick-create-cli.md)-- [PowerShell](quick-create-powershell.md)
+- [PowerShell](quick-create-powershell.md)
virtual-machines Vm Naming Conventions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-naming-conventions.md
This page outlines the naming conventions used for Azure VMs. VMs use these nami
| *Sub-family | Used for specialized VM differentiations only| | # of vCPUs| Denotes the number of vCPUs of the VM | | *Constrained vCPUs| Used for certain VM sizes only. Denotes the number of vCPUs for the [constrained vCPU capable size](./constrained-vcpu.md) |
-| Additive Features | One or more lower case letters denote additive features, such as: <br> a = AMD-based processor <br> d = diskfull (local temp disk is present); this is for newer Azure VMs, see [Ddv4 and Ddsv4-series](./ddv4-ddsv4-series.md) <br> i = isolated size <br> l = low memory; a lower amount of memory than the memory intensive size <br> m = memory intensive; the most amount of memory in a particular size <br> t = tiny memory; the smallest amount of memory in a particular size <br> s = Premium Storage capable, including possible use of [Ultra SSD](./disks-types.md#ultra-disks) (Note: some newer sizes without the attribute of s can still support Premium Storage e.g. M128, M64, etc.)<br> |
+| Additive Features | One or more lower case letters denote additive features, such as: <br> a = AMD-based processor <br> d = diskful (i.e., a local temp disk is present); this is for newer Azure VMs, see [Ddv4 and Ddsv4-series](./ddv4-ddsv4-series.md) <br> i = isolated size <br> l = low memory; a lower amount of memory than the memory intensive size <br> m = memory intensive; the most amount of memory in a particular size <br> t = tiny memory; the smallest amount of memory in a particular size <br> s = Premium Storage capable, including possible use of [Ultra SSD](./disks-types.md#ultra-disks) (Note: some newer sizes without the attribute of s can still support Premium Storage e.g. M128, M64, etc.)<br> |
| *Accelerator Type | Denotes the type of hardware accelerator in the specialized/GPU SKUs. Only the new specialized/GPU SKUs launched from Q3 2020 will have the hardware accelerator in the name. | | Version | Denotes the version of the VM Family Series |
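To make the convention concrete, here's a small, hypothetical Java sketch that decodes simple size names such as `Standard_D4ds_v5` into family, vCPU count, additive feature letters, and version. It deliberately ignores sub-families, constrained-vCPU sizes, and accelerator types, so treat it as illustrative only.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VmSizeName {
    // Family letters, vCPU count, additive feature letters, optional version.
    // Simplified on purpose: no sub-family, constrained vCPUs, or accelerator type.
    private static final Pattern SIMPLE_SIZE =
            Pattern.compile("^Standard_([A-Z]+)(\\d+)([a-z]*)(?:_v(\\d+))?$");

    public static void main(String[] args) {
        for (String size : new String[] {"Standard_D4ds_v5", "Standard_E2d_v5", "Standard_E2bs_v5"}) {
            Matcher m = SIMPLE_SIZE.matcher(size);
            if (m.matches()) {
                System.out.printf("%s -> family=%s, vCPUs=%s, features=%s, version=v%s%n",
                        size, m.group(1), m.group(2),
                        m.group(3).isEmpty() ? "(none)" : m.group(3),
                        m.group(4) == null ? "1" : m.group(4));
            }
        }
    }
}
```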
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/overview.md
Azure announced an industry leading single instance virtual machine Service Leve
## VM size
-The [size](../sizes.md) of the VM that you use is determined by the workload that you want to run. The size that you choose then determines factors such as processing power, memory, and storage capacity. Azure offers a wide variety of sizes to support many types of uses.
+The [size](../sizes.md) of the VM that you use is determined by the workload that you want to run. The size that you choose then determines factors such as processing power, memory, storage capacity, and network bandwidth. Azure offers a wide variety of sizes to support many types of uses.
Azure charges an [hourly price](https://azure.microsoft.com/pricing/details/virtual-machines/windows/) based on the VM's size and operating system. For partial hours, Azure charges only for the minutes used. Storage is priced and charged separately.
Create your first VM!
- [Portal](quick-create-portal.md) - [PowerShell](quick-create-powershell.md)-- [Azure CLI](quick-create-cli.md)
+- [Azure CLI](quick-create-cli.md)
virtual-machines Monitor Sap On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/monitor-sap-on-azure.md
Azure Monitor for SAP Solutions uses the [Azure Monitor](../../../azure-monitor/
- Create [custom visualizations](../../../azure-monitor/visualize/workbooks-overview.md#getting-started) by editing the default Workbooks provided by Azure Monitor for SAP Solutions. - Write [custom queries](../../../azure-monitor/logs/log-analytics-tutorial.md). - Create [custom alerts](../../../azure-monitor/alerts/alerts-log.md) by using Azure Log Analytics workspace. -- Take advantage of the [flexible retention period](../../../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period) in Azure Monitor Logs/Log Analytics.
+- Take advantage of the [flexible retention period](../../../azure-monitor/logs/data-retention-archive.md) in Azure Monitor Logs/Log Analytics.
- Connect monitoring data with your ticketing system. ## What data does Azure Monitor for SAP Solutions collect?
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
Previously updated : 02/04/2022 Last updated : 04/07/2022 # Web Application Firewall DRS rule groups and rules
In Anomaly Scoring mode, traffic that matches any rule isn't immediately blocked
|Warning |3| |Notice |2|
-There's a threshold of 5 for the Anomaly Score to block traffic. So, a single *Critical* rule match is enough for the WAF to block a request, even in Prevention mode. But one *Warning* rule match only increases the Anomaly Score by 3, which isn't enough by itself to block the traffic.
+There's a threshold of 5 for the Anomaly Score to block traffic. So, a single *Critical* rule match is enough for the WAF to block a request, even in Prevention mode. But one *Warning* rule match only increases the Anomaly Score by 3, which isn't enough by itself to block the traffic. To learn which content types are supported for body inspection with different DRS versions, see [What content types does WAF support?](waf-faq.yml#what-content-types-does-waf-support-) in the FAQ.
-> [!NOTE]
-> Body inspection is only available on DRS 2.0
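As a quick, purely illustrative sketch of that arithmetic (not the WAF's actual implementation), the snippet below sums the severity weights listed above and compares the total against the threshold of 5:

```java
import java.util.List;
import java.util.Map;

public class AnomalyScoreDemo {
    // Severity weights from the table above; Critical = 5 per the surrounding text.
    static final Map<String, Integer> WEIGHTS = Map.of("Critical", 5, "Warning", 3, "Notice", 2);
    static final int BLOCK_THRESHOLD = 5;

    static boolean wouldBlock(List<String> matchedRuleSeverities) {
        int anomalyScore = matchedRuleSeverities.stream()
                .mapToInt(severity -> WEIGHTS.getOrDefault(severity, 0))
                .sum();
        return anomalyScore >= BLOCK_THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(wouldBlock(List.of("Critical")));          // true: 5 >= 5
        System.out.println(wouldBlock(List.of("Warning")));           // false: 3 < 5
        System.out.println(wouldBlock(List.of("Warning", "Notice"))); // true: 3 + 2 >= 5
    }
}
```

A single *Critical* match meets the threshold on its own, a lone *Warning* doesn't, and a *Warning* combined with a *Notice* (3 + 2) does.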
### DRS 2.0